In Oracle APEX 5.0, the item's configuration section gave us these options:
Storage Type:
- Table APEX_APPLICATION_TEMP_FILES: stores the uploaded files in a temporary location that you can access through the APEX_APPLICATION_TEMP_FILES view. Application Express automatically deletes the files at the end of the session or at the end of the upload request, depending on what we select in "Purge File at".
- BLOB column specified in item Source attribute: stores the uploaded file in the table used by the Automatic Row Processing (DML) process, in the column specified in the item's Source attribute. The column must be of data type BLOB. When the file is downloaded, the table name from the Automatic Row Fetch process is used.
Purge File at: End of Session / End of Request
Now, in APEX 5.1, we have several improvements to the "File Browse..." item type, such as:
- Allow Multiple Files: YES / NO
- File Types: (here we enter the extensions of the allowed files)
This makes it easy to configure our File Browse item. If you want to see how this item works, I recommend downloading the sample app "Sample File Upload and Download" from the packaged applications and learning how to implement this item in your applications developed with APEX. See you soon!
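If you use the temporary-table storage option, a page process typically copies the upload into one of your own tables. Here is a minimal sketch, assuming a hypothetical single-file File Browse item named P1_FILE and a hypothetical target table MY_FILES (filename, mime_type, blob_content):

-- Hypothetical on-submit page process: copy the uploaded file from the
-- temporary APEX view into our own table; APEX purges the temporary copy
-- according to the "Purge File at" setting.
BEGIN
  INSERT INTO my_files (filename, mime_type, blob_content)
  SELECT filename, mime_type, blob_content
  FROM   apex_application_temp_files
  WHERE  name = :P1_FILE;  -- the page item holds the temporary file name
END;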
Blog Post: New improvements to the "File Browse..." item type in Oracle APEX 5.1
Blog Post: DevOps for Docker - A Trend for 2017
Docker containers have revolutionized the use of software with their modularity, platform independence, efficient use of resources, and fast installation time. Docker lends itself to the DevOps approach to software development. This article looks at the technology behind the trend: DevOps for Docker. The benefits of DevOps include:
- Reduction in time-to-market
- Application updates in real time
- Reliable releases
- Improved product quality
- Improved Agile environment
What is DevOps? Agile software development is based around adaptive software development, continuous improvement and early delivery. But one of the fallouts of Agile has been an increase in the number of releases, to the point where they can't always be implemented reliably and quickly. In response to this situation, the objective of DevOps is to establish organizational collaboration between the various teams involved in software delivery and to automate the process of software delivery so that a new release can be tested, deployed and monitored continuously. DevOps is derived from combining Development and Operations. It seeks to automate processes and involves collaboration between software developers, operations and quality assurance teams. Using a DevOps approach with Docker can create a continuous software delivery pipeline from GitHub code to application deployment.
How Can DevOps Be Used With Docker? A Docker container is run by using a Docker image, which may be available locally or on a repository such as Docker Hub. Suppose, as a use case, that MySQL database or some other database makes new releases available frequently with minor bug fixes or patches. How can the new release be made available to end users without a time lag? Docker images are associated with code repositories: a GitHub code repo or some other repository such as AWS CodeCommit. If a developer were to build a Docker image from the GitHub code repo and make it available on Docker Hub to end users, and if the end users were to deploy the Docker image as Docker containers, it would involve several individually run phases:
1. Build the GitHub code into a Docker image (with the docker build command).
2. Test the Docker image (with the docker run command).
3. Upload the Docker image to Docker Hub (with the docker push command).
4. End user downloads the Docker image (with the docker pull command).
5. End user runs a Docker container (with the docker run command).
6. End user deploys an application (using AWS Elastic Beanstalk, for example).
7. End user monitors the application.
The GitHub-to-deployment pipeline is shown in the following illustration. As a new MySQL database release becomes available with new bug fixes over a short duration (which could be a day), the complete process would need to be repeated. But a DevOps approach could be used to make the Docker image pipeline from GitHub to deployment continuous, without requiring user or administrator intervention.
DevOps Design Patterns: "A software design pattern is a general reusable solution to a commonly occurring problem" - Wikipedia. The DevOps design patterns are centered on continuous code integration, continuous software testing, continuous software delivery, and continuous deployment. Automation, collaboration, and continuous monitoring are some of the other design patterns used in DevOps.
Continuous Code Integration: Continuous code integration is the process of continuously integrating source code into a new build or release. The source code could be in a local repository or on GitHub or AWS CodeCommit. 
Continuous Software Testing Continuous software testing is continuous testing of a new build or release. Tools such as Jenkins provide several features for testing; for instance, user input at each phase of a Jenkins Pipeline. Jenkins provides plugins such as Docker Build Step Plugin for testing each Docker application phase separately: running a Docker container, uploading a Docker image and stopping a Docker container. Continuous Software Delivery Continuous software delivery is making new software builds available to end users for deployment in production. For a Docker application, continuous delivery involves making each new version/tag of a Docker image available on a repository such as Docker Hub or an Amazon EC2 Container repository. Continuous Software Deployment Continuous software deployment involves deploying the latest release of a Docker image continuously, such that each time a new version/tag of a Docker image becomes available the Docker image gets deployed in production. Kubernetes Docker container manager already provides features such as rolling updates to update a Docker image to the latest without interruption of service. The use of Jenkins rolling updates may be automated, such that each time a new version/tag of a Docker image becomes available the Docker image deployed is updated continuously. Continuous Monitoring Continuous monitoring is the process of monitoring a running application. Tools such as Sematext can be used to monitor a Docker application. A Sematext Docker Agent can be deployed to monitor Kubernetes cluster metrics and collect logs. Automation One type of automation that can be made for Docker applications is to automate the installation of tools such as Kubernetes, whose installation is quite involved. Kubernetes 1.4 includes a new tool called “kubeadm” to automate the installation of Kubernetes on Ubuntu and CentOS. The kubeadm tool is not supported on CoreOS. Collaboration Collaboration involves cross-team work and sharing of resources. As an example, different development teams could be developing code for different versions (tags) of a Docker image on a GitHub repository and all the Docker image tags would be built and uploaded to Docker Hub simultaneously and continuously. Jenkins provides the Multibranch Pipeline project to build code from multiple branches of a repository such as the GitHub repository. Another example of collaboration involves Helm charts, pre-configured Kubernetes resources that can be used directly. For example, pre-configured Kubernetes resources for MySQL database are available as Helm charts in the kubernetes/charts repository and helm/charts repository. Helm charts eliminate the need to develop complex applications or perform complex upgrades to software that has already been developed. What are some of the DevOps Tools? Jenkins Jenkins is a commonly used automation and continuous delivery tool that can be used to build, test and deliver Docker images continuously. Jenkins provides several plugins that can be used with Docker, including Docker Plugin, Docker Build Step Plugin, and Amazon EC2 Plugin. With Amazon EC2 Plugin, a Cloud configuration can be used to provision Amazon EC2 instances dynamically for Jenkins Slave agents. Docker Plugin can be used to configure a cloud to run a Jenkins project in a Docker container. Docker Build Step Plugin is used to test a Docker image for individual phases, such as to build a Docker image, run a container, push a Docker image to Docker hub and stop and remove a Docker container. 
Jenkins itself may be run as a Docker container using the Docker image “jenkins”. AWS CodeCommit and AWS CodeBuild Amazon AWS provides several tools for DevOps. AWS CodeCommit is a version control service similar to GitHub to store and manage source code files. AWS CodeBuild is a DevOps tool to build and test code. The code to be built can be integrated continuously from the GitHub or CodeCommit and the output Docker image from CodeBuild can be uploaded to Docker Hub or to Amazon EC2 Container Registry as a build completes. What CodeBuild provides is an automated and continuous process for the Build, Test, Package and Deliver phases shown in the Pipeline flow diagram in this article. Alternatively, CodeBuild output could be stored in an Amazon S3 Bucket. AWS Elastic Beanstalk AWS Elastic Beanstalk is another AWS DevOps tool, used for deploying and scaling Docker applications on the cloud. AWS Beanstalk provides automatic capacity provisioning, load balancing, scaling and monitoring for Docker application deployments. A Beanstalk application and environment can be created from a Dockerfile packaged as a zip file that includes the other application resources, or just from an unpackaged Dockerfile if not including other resource files. Alternatively, the configuration for a Docker application including the Docker image and environment variables could be specified in a Dockerrun.aws.json file. An example Dockerrun.aws.json file is listed in which multiple containers are configured, one of which is for MySQL database and the other for nginx server: { "AWSEBDockerrunVersion": 2, "volumes": [ { "name": "mysql-app", "host": { "sourcePath": "/var/app/current/mysql-app" } }, { "name": "nginx-proxy-conf", "host": { "sourcePath": "/var/app/current/proxy/conf.d" } } ], "containerDefinitions": [ { "name": "mysql-app", "image": "mysql", "environment": [ { "name": "MYSQL_ROOT_PASSWORD", "value": "mysql" }, { "name": "MYSQL_ALLOW_EMPTY_PASSWORD", "value": "yes" }, { "name": "MYSQL_DATABASE", "value": "mysqldb" }, { "name": "MYSQL_PASSWORD", "value": "mysql" } ], "essential": true, "memory": 128, "mountPoints": [ { "sourceVolume": "mysql-app", "containerPath": "/var/mysql", "readOnly": true } ] }, { "name": "nginx-proxy", "image": "nginx", "essential": true, "memory": 128, "portMappings": [ { "hostPort": 80, "containerPort": 80 } ], "links": [ "mysql-app" ], "mountPoints": [ { "sourceVolume": "mysql-app", "containerPath": "/var/mysql", "readOnly": true }, { "sourceVolume": "nginx-proxy-conf", "containerPath": "/etc/nginx/conf.d", "readOnly": true } ] } ] } The Beanstalk application deployed can be monitored in a dashboard. AWS CodePipeline AWS CodePipeline is a continuous integration and continuous delivery service that can be used to build, test and deploy code every time the code is updated on the GitHub repository or CodeCommit. You can just start a CodePipeline that integrates Docker image code from a GitHub (or CodeCommit) repo, builds the code into a Docker image and tests the code using Jenkins, and deploys the Docker image to an AWS Beanstalk application deployment environment; and eliminate the need for user intervention in updates for continuous deployment of a Docker application. Every time code is updated in the source repository, the complete CodePipeline re-runs and the deployment is updated without any discontinuation in service. 
CodePipeline provides graphical user interfaces for monitoring the progress of a pipeline through each of its phases: source code integration, build and test, and deployment. In this article we discussed a new trend in Docker software development called DevOps. DevOps takes Agile software development to the next level. The article discussed several new tools and best practices for Docker software use. DevOps is an emerging trend and is well suited to Docker containerized applications. Deepak Vohra has published two books on Docker: Pro Docker and Kubernetes Microservices with Docker.
Blog Post: Tips & Techniques - PL/SQL Object Table Function
This article covers how to write PL/SQL object table functions. PL/SQL object table functions provide you with a powerful technique to solve complex query problems. You can use object table functions to return result sets that you can’t write as a query. There is just one caveat on using object table functions. You must truly require complex logic that can’t be resolved in an ordinary query. For reference, this example really doesn’t need to be an object table function but it demonstrates tips and techniques related to them. This article teaches you how to do the following: Create an object type and collection Create an object table function Create co-dependent cursors and merge co-dependent cursors Use native dynamic SQL (NDS) to query a result set of unknown values Use bulk collection to populate a collection of object types Construct instances and a collection of instances in a SELECT -list clause You can access the results of a PL/SQL object table function from an ordinary query when you leverage Oracle’s built-in TABLE function. This means you can handle the results like columns from any table or view. A PL/SQL object table function returns a collection of a SQL object type. A SQL object type can be simple or complex. A simple object type lists a record. A record, sometimes called a struct , is a collection of fields. As a rule, the fields of a record often have different scalar data types, like the definition of a table. The fields of a record are often called members to avoid confusing them with elements of a collection. A complex SQL object type includes member methods. Complex SQL object types require special handling but object table functions return simple SQL object types. While you can create a table with a CREATE TABLE statement, you create a collection of a SQL object type in two steps. Step one creates the SQL object type. Step two creates the SQL collection. To build the PL/SQL object table function, you need to create the object type and the collection of the object type first. The following statement creates the table_struct object type: SQL> CREATE OR REPLACE 2 TYPE table_struct IS OBJECT 3 ( table_name VARCHAR2(30) 4 , column_cnt NUMBER 5 , avg_col_len NUMBER 6 , row_cnt NUMBER 7 , avg_row_len NUMBER 8 , chain_cnt NUMBER ); 9 / The table_struct example separates the CREATE OR REPLACE syntax on line 1 from the object type definition on lines 2 through 8. That’s done to highlight the importance of the semicolon on line 8. The semicolon is a statement terminator. You need a statement terminator because the object type definition is a PL/SQL statement. After the PL/SQL statement, you need a forward slash on line 9 to execute the statement. After you create the the table_struct object type, you create the table_list collection with the following statement: SQL> CREATE OR REPLACE 2 TYPE table_list IS TABLE OF table_struct; 3 / Having defined the object type and collection, you can now create an object type function. The difference between an ordinary function and an object type function is the return type. Ordinary functions return scalar data types. Pipelined table functions return a SQL result set in place of a PL/SQL record collection. Object table functions return a SQL collection. The difference between a pipelined table function and object table function is straightforward. A pipelined table function converts a PL/SQL collection of a PL/SQL record type (another type of record structure) or an embedded SQL object type into an aggregate result set. 
You write pipelined table functions when working with legacy PL/SQL collections. An object type function returns a SQL collection. SQL collections are aggregate result sets by default. This makes type casting the only difference between pipelined and object table functions. The SQL built-in TABLE function manages the result from the pipelined and object table functions the same way. The listing function shows you how to write an object table function. The listing function returns a composite view of data dictionary values and the dynamic count of rows from physical tables. It uses explicit cursors and Native Dynamic SQL to create a dynamic data set, which you can return by a simple query. You have two options when you define any PL/SQL function – a row-by-row or bulk processing. This article explores both because it is often easier for some developers to see the row-by-row approach before they explore the bulk processing option. As a rule, the bulk processing approach is always the more efficient solution. The row-by-row version of the listing program is: SQL> CREATE OR REPLACE 2 FUNCTION listing RETURN table_list IS 3 /* Variable list. */ 4 lv_column_cnt NUMBER; 5 lv_avg_col_len NUMBER; 6 lv_row_cnt NUMBER; 7 8 /* Declare a statement variable. */ 9 stmt VARCHAR2(200); 10 11 /* Declare a system reference cursor variable. */ 12 lv_refcursor SYS_REFCURSOR; 13 lv_table_cnt NUMBER; 14 15 /* Declare an output variable. */ 16 lv_list TABLE_LIST := table_list(); 17 18 /* Declare a table list cursor. */ 19 CURSOR c IS 20 SELECT table_name 21 , avg_row_len 22 , chain_cnt 23 FROM user_tables 24 WHERE table_name NOT IN 25 ('DEPT','EMP','APEX$_ACL','APEX$_WS_WEBPG_SECTIONS' 26 ,'APEX$_WS_ROWS','APEX$_WS_HISTORY','APEX$_WS_NOTES' 27 ,'APEX$_WS_LINKS','APEX$_WS_TAGS','APEX$_WS_FILES' 28 ,'APEX$_WS_WEBPG_SECTION_HISTORY','DEMO_USERS' 29 ,'DEMO_CUSTOMERS','DEMO_ORDERS','DEMO_PRODUCT_INFO' 30 ,'DEMO_ORDER_ITEMS','DEMO_STATES'); 31 32 /* Declare a column count. */ 33 CURSOR column_cnt 34 ( cv_table_name VARCHAR2 ) IS 35 SELECT COUNT(column_id) AS cnt_columns 36 , SUM(avg_col_len)/COUNT(column_id) AS avg_col_len 37 FROM user_tab_columns 38 WHERE table_name = cv_table_name; 39 BEGIN 40 /* Read through the data set of non-environment variables. */ 41 FOR i IN c LOOP 42 /* Count the columns of a table. */ 43 FOR j IN column_cnt(i.table_name) LOOP 44 lv_column_cnt := j.cnt_columns; 45 lv_avg_col_len := j.avg_col_len; 46 END LOOP; 47 48 /* Declare a statement. */ 49 stmt := 'SELECT COUNT(*) AS column_cnt FROM '||i.table_name; 50 51 /* Open the cursor and write set to collection. */ 52 OPEN lv_refcursor FOR stmt; 53 LOOP 54 FETCH lv_refcursor INTO lv_table_cnt; 55 EXIT WHEN lv_refcursor%NOTFOUND; 56 lv_list.EXTEND; 57 lv_list(lv_list.COUNT) := table_struct( 58 table_name => i.table_name 59 , avg_row_len => i.avg_row_len 60 , chain_cnt => i.chain_cnt 61 , column_cnt => lv_column_cnt 62 , avg_col_len => lv_avg_col_len 63 , row_cnt => lv_table_cnt); 64 END LOOP; 65 END LOOP; 66 /* Return the collection. */ 67 RETURN lv_list; 68 END; 69 / Line 2 returns the table_list collection. Lines 9, 12, and 13 support the NDS statement in the listing function. Line 16 creates an empty table_list collection by declaring the variable and instantiating an empty collection. The column_cnt cursor is a parameterized cursor, and the SELECT -list on lines 35 and 36 calculates statistics for each table processed in the nested for loop on lines 43 thru 46. The column_cnt cursor and nested loop are examples of poor coding technique . 
A simple join between the user_tables and user_tab_columns views eliminates both. While the row-by-row mechanic doesn’t highlight this type of inefficiency, the bulk processing approach does. Line 49 uses concatenation because NDS doesn’t let you bind table names. Line 52 opens a PL/SQL reference cursor for the dynamic statement. Line 54 fetches each row into a scalar variable. Line 55 exits the dynamic reference cursor when all rows have been read. Line 56 extends the memory of the lv_list collection and line 57 assigns an instance of the table_struct object type as an element of the collection. Line 67 returns the lv_list collection. The NDS element is also a poor coding technique because the num_rows column in the user_tables view holds the number of rows in any table. While the num_rows column is dependent on running statistics, most databases enable statistics as a background process. You should only use NDS when you truly need to do so. You can query the result and format the results in SQL*Plus or simply run the following query in SQL*Developer or Toad: SQL> SELECT * 2 FROM TABLE(listing); Line 1 uses the asterisk to return all columns in the SELECT -list. Line 2 uses the TABLE built-in function to convert the collection to a SQL result set. The program would print something like the following in SQL*Plus with appropriate formatting: Average Column Row Chain Table Name Column # Length Length Count Row # -------------------- -------- --------- ------ ------ ------ ITEM 14 14.21 197 0 93 SYSTEM_USER 11 4.36 48 0 5 ACCOUNT_LIST 8 5.13 38 0 200 RENTAL_ITEM 9 4.56 41 0 4,703 STREET_ADDRESS 8 6.50 52 0 28 CALENDAR 9 6.00 53 0 300 TELEPHONE 11 5.00 55 0 18 AIRPORT 9 6.89 61 0 6 CONTACT 10 5.30 51 0 18 TRANSACTION 12 6.58 79 0 4,694 ADDRESS 10 5.60 55 0 18 MEMBER 9 7.22 65 0 10 PRICE 11 4.36 48 0 558 RENTAL 8 5.75 46 0 4,694 COMMON_LOOKUP 10 10.30 101 0 49 15 rows selected. The refactored listing object table function uses bulk processing. As a result of the change, you remove the column_cnt cursor by joining the user_tables and user_tab_columns views. You also add the num_rows column to your base cursor. The following creates the refactored listing object table function: SQL> CREATE OR REPLACE 2 FUNCTION listing RETURN table_list IS 3 /* Variable list. */ 4 lv_column_cnt NUMBER; 5 lv_avg_col_len NUMBER; 6 lv_row_cnt NUMBER; 7 8 /* Declare a statement variable. */ 9 stmt VARCHAR2(200); 10 11 /* Declare a system reference cursor variable. */ 12 lv_refcursor SYS_REFCURSOR; 13 lv_table_cnt NUMBER; 14 15 /* Declare an output variable. */ 16 lv_list TABLE_LIST; 17 18 /* Declare a table list cursor. */ 19 CURSOR c IS 20 SELECT table_struct( 21 table_name => ut.table_name 22 , column_cnt => COUNT(utc.column_id) 23 , avg_col_len =>SUM(avg_col_len)/COUNT(column_id) 24 , avg_row_len => ut.avg_row_len 25 , chain_cnt => ut.chain_cnt 26 , row_cnt => num_rows ) 27 FROM user_tables ut INNER JOIN user_tab_columns utc 28 ON ut.table_name = utc.table_name 29 WHERE ut.table_name NOT IN 30 ('DEPT','EMP','APEX$_ACL','APEX$_WS_WEBPG_SECTIONS' 31 ,'APEX$_WS_ROWS','APEX$_WS_HISTORY','APEX$_WS_NOTES' 32 ,'APEX$_WS_LINKS','APEX$_WS_TAGS','APEX$_WS_FILES' 33 ,'APEX$_WS_WEBPG_SECTION_HISTORY','DEMO_USERS' 34 ,'DEMO_CUSTOMERS','DEMO_ORDERS','DEMO_PRODUCT_INFO' 35 ,'DEMO_ORDER_ITEMS','DEMO_STATES') 36 GROUP BY ut.table_name 37 , ut.avg_row_len 38 , ut.chain_cnt 39 , ut.num_rows; 40 41 BEGIN 42 /* Read set into collection. */ 43 OPEN c; 44 FETCH c BULK COLLECT INTO lv_list; 45 CLOSE c; 46 47 /* Return the collection. 
*/ 48 RETURN lv_list; 49 END; 50 / Line 16 no longer initializes the lv_list collection because the BULK COLLECT assigns a complete collection to the variable. The BULK COLLECT effectively manages the memory allocation because you embed the constructor logic in the SELECT -list of the cursor on lines 20 through 26. The new cursor includes a join between the user_tables and user_tab_columns views. It also includes a reference to the num_rows column of the user_tables view. This effectively means a single bulk cursor returns everything that’s needed in the listing object table function. Line 43 opens the cursor, line 44 fetches all the rows into a collection, and line 45 closes the cursor. Line 48 returns the collection of the object type. The reality is that simple examples like this demonstrate tips and techniques but may not really benefit from those features. While you have learned the tips and techniques to write and leverage object table functions, don’t write them unless they add value to your solution space.
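Because the TABLE operator exposes the collection like an ordinary table, you can also filter, sort and join the function's output. A minimal sketch, assuming the listing function above is in place (the row count threshold is an arbitrary example value):

SQL> SELECT t.table_name, t.row_cnt
  2  FROM TABLE(listing) t
  3  WHERE t.row_cnt > 100
  4  ORDER BY t.row_cnt DESC;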
Blog Post: Emulating a finally clause in PL/SQL
PL/SQL does not support a finally clause, as many other languages do, including Java. Here's a description of the finally block from the Java SE doc: The finally block always executes when the try block exits. This ensures that the finally block is executed even if an unexpected exception occurs. But finally is useful for more than just exception handling — it allows the programmer to avoid having cleanup code accidentally bypassed by a return, continue, or break. Putting cleanup code in a finally block is always a good practice, even when no exceptions are anticipated. The first thing to say regarding PL/SQL and finally is that the need for it in PL/SQL is likely less critical than in other languages, precisely because the PL/SQL runtime engine (and the underlying Oracle Database engine) does most of the clean up for you. Any variables you declare, cursors you open, types you define inside a block are automatically cleaned up (memory released) when that block terminates. Still, there are exceptions to this rule, including: >> Changes to tables are not automatically rolled back or committed when a block terminates. If you include an autonomous transaction pragma in your block, PL/SQL will "insist" (raise an exception at runtime) if you do not rollback or commit, but that's different. >> Elements declared at the package level have session scope. They will not be automatically cleaned up when a block in which they are used terminates. Here's a very simple demonstration of that fact. I declare a cursor at the package level, open it inside a block, "forget" to close it, and then try to open it again in another block: CREATE OR REPLACE PACKAGE serial_package AUTHID DEFINER AS CURSOR emps_cur IS SELECT * FROM employees; END serial_package; / BEGIN OPEN serial_package.emps_cur; END; / BEGIN OPEN serial_package.emps_cur; END; / BEGIN OPEN serial_package.emps_cur; END; / ORA-06511: PL/SQL: cursor already open ORA-06512: at "STEVEN.SERIAL_PACKAGE", line 5 ORA-06512: at line 2 Try it out yourself in LiveSQL . Since there is no finally clause, you have to take care of things yourself. The best way to do this - and I am not claiming it is optimal - is to create a nested cleanup procedure and invoke that as needed. Here we go - no more error when I attempt to open the cursor the second time. CREATE OR REPLACE PACKAGE serial_package AUTHID DEFINER AS CURSOR emps_cur IS SELECT * FROM employees; END serial_package; / CREATE OR REPLACE PROCEDURE use_packaged_cursor AUTHID DEFINER IS PROCEDURE cleanup IS BEGIN /* If called from exception section log the error */ IF SQLCODE <> 0 THEN /* Uses open source Logger utility: https://github.com/OraOpenSource/Logger */ logger.log_error ('use_packaged_cursor'); END IF; IF serial_package.emps_cur%ISOPEN THEN CLOSE serial_package.emps_cur; END IF; END cleanup; BEGIN OPEN serial_package.emps_cur; cleanup ; EXCEPTION WHEN NO_DATA_FOUND THEN /* Clean up but do not re-raise (just to show that you might want different behaviors for different exceptions). */ cleanup ; WHEN OTHERS THEN cleanup ; RAISE; END; / BEGIN use_packaged_cursor; END; / PL/SQL procedure successfully completed. BEGIN use_packaged_cursor; END; / PL/SQL procedure successfully completed. (also available in LiveSQL ) Now, I am not , repeat NOT, claiming that this is as good as having a finally clause. I am just saying: this is how you can (have to) achieve a similar effect.
Blog Post: Players for PL/SQL Challenge Championship for 2016
The following players will be invited to participate in the PL/SQL Challenge Championship for 2016, currently scheduled to take place on 23 March at 14:00 UTC. The number in parentheses after their names are the number of championships in which they have already participated. Congratulations to all listed below on their accomplishment and best of luck in the upcoming competition! Name Rank Qualification Country SteliosVlasopoulos (13) 1 Top 50 Belgium siimkask (16) 2 Top 50 Estonia mentzel.iudith (16) 3 Top 50 Israel li_bao (4) 4 Top 50 Russia James Su (11) 5 Top 50 Canada ivan_blanarik (10) 6 Top 50 Slovakia NielsHecker (17) 7 Top 50 Germany Rakesh Dadhich (8) 8 Top 50 India Karel_Prech (6) 9 Top 50 No Country Set Marek Sobierajski (1) 10 Top 50 Poland Rytis Budreika (4) 11 Top 50 Lithuania _tiki_4_ (9) 12 Top 50 Germany krzysioh (5) 13 Top 50 Poland Chad Lee (13) 14 Top 50 United States João Borges Barreto (6) 15 Top 50 Portugal Andrey Zaytsev (5) 16 Top 50 Russia coba (1) 17 Top 50 Netherlands patch72 (3) 18 Top 50 Netherlands Kuvardin Evgeniy (2) 19 Top 50 Russia VictorD (3) 20 Top 50 Russia Vyacheslav Stepanov (15) 21 Top 50 No Country Set Maxim Borunov (3) 22 Top 50 Russia tonyC (2) 23 Top 50 United Kingdom JustinCave (13) 24 Top 50 United States Chase (2) 25 Top 50 Canada Joaquin_Gonzalez (10) 26 Top 50 Spain Pavel_Noga (4) 27 Top 50 Czech Republic seanm95 (3) 28 Top 50 United States syukhno (0) 29 Top 50 Ukraine tonywinn (5) 30 Top 50 Australia JasonC (1) 31 Top 50 United Kingdom Andrii Dorofeiev (0) 32 Top 50 Ukraine Sachi (1) 33 Top 50 India ratte2k4 (0) 34 Top 50 Germany Alexey Ponomarenko (1) 35 Top 50 No Country Set PZOL (2) 36 Top 50 Hungary Otto Palenicek (0) 37 Top 50 Germany Jānis Baiža (10) 38 Top 50 Latvia JeroenR (10) 39 Top 50 Netherlands Rimantas Adomauskas (3) 40 Top 50 Lithuania Henry_A (3) 41 Top 50 Czech Republic Sherry (2) 42 Top 50 Czech Republic ted (0) 43 Top 50 United Kingdom MarkM. (0) 44 Top 50 Germany YuanT (11) 45 Top 50 United States kbentley1 (1) 46 Top 50 United States swesley_perth (2) 47 Top 50 Australia Talebian (3) 48 Top 50 Netherlands mcelaya (1) 49 Top 50 Spain berkeso (0) 50 Top 50 Hungary
Blog Post: What is Basic Flashback Data Archive?
This is an interesting tidbit. The Database Licensing Guide here makes the following statement: “Basic Flashback Data Archive is in all editions. Optimization for Flashback Data Archive requires EE and the Advanced Compression option.” As a result, the “Optimization for Flashback Data Archive” feature is listed as only available in Enterprise Edition in the table of features/options and editions in the licensing guide. But what exactly is Basic Flashback Data Archive, which is available in all editions including SE1, SE and EE? The interesting thing is that what is known as “Basic Flashback Data Archive” actually includes all the functionality of Flashback Data Archive, except for compression of the history tables that Flashback Data Archive manages. For this compression, simply specify OPTIMIZE DATA when creating or altering a Flashback Data Archive. This enables optimization of data storage for the history tables maintained by Flashback Data Archive. The default is NO OPTIMIZE DATA, which signifies Basic Flashback Data Archive. OPTIMIZE DATA optimizes the storage of data in history tables by using features such as Advanced Row or LOB Compression, Advanced LOB Deduplication, or segment-level/row-level compression tiering. For more information on Flashback Data Archive, you can refer to the documentation here.
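As a quick illustration, OPTIMIZE DATA is just a clause on the CREATE or ALTER FLASHBACK ARCHIVE statement. A minimal sketch; the archive name, tablespace, quota and retention below are made-up values:

-- Create a Flashback Data Archive with storage optimization enabled
-- (per the licensing note above, the optimization needs EE plus Advanced Compression).
CREATE FLASHBACK ARCHIVE fda_demo
  TABLESPACE fda_ts
  QUOTA 10G
  OPTIMIZE DATA
  RETENTION 1 YEAR;

-- Switch optimization on or off for an existing archive.
ALTER FLASHBACK ARCHIVE fda_demo OPTIMIZE DATA;
ALTER FLASHBACK ARCHIVE fda_demo NO OPTIMIZE DATA;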
Blog Post: Upgrading our local development environment from APEX 5.0 to APEX 5.1
In this article I want to show you how we can upgrade our local development environment from APEX 5.0 to APEX 5.1. In the lower right corner of the APEX home page we can see the version we are running; in my case it is 5.0.3.00.03.
The first thing to do is download the zip file (all languages) from the following URL: http://www.oracle.com/technetwork/developer-tools/apex/downloads/index.html
We need to be registered on the Oracle site and accept the license terms in order to download APEX. Save the apex_5.1.zip file and unzip it, in my case on the local C: drive, and rename the apex_5.1 folder to apex. Note: since I already have a previous version of APEX, what I did was rename the existing apex folder to apex50, because that folder holds the files of my current APEX version.
Open a CMD command window (on Windows) and change to the directory that contains the apex folder --> C:\apex
Open SQL*Plus.
On Windows:
SYSTEM_DRIVE:\ sqlplus /nolog
SQL> CONNECT SYS as SYSDBA
Enter password: SYS_password
On UNIX and Linux:
$ sqlplus /nolog
SQL> CONNECT SYS as SYSDBA
Enter password: SYS_password
Run the installation script:
@apexins.sql APEX APEX TEMP /i/
(use the tablespaces that correspond to your installation)
Once the script finishes, it exits SQL*Plus. Since this is an upgrade, I am not going to update the password of the APEX administration instance. Log back into SQL*Plus with the SYS user credentials. In my case I am using the embedded PL/SQL Gateway configuration, so we need to run the following script to update the images directory:
SQL> @apex_epg_config.sql C:\
Note: because our folder is at C:\apex we enter only C:\; if we had unzipped the file into C:\temp\apex, we would specify the path C:\temp.
If the ANONYMOUS user is locked, unlock it with the following statement:
SQL> ALTER USER ANONYMOUS ACCOUNT UNLOCK;
Enable Network Services (ACL) to grant access to any host for APEX_050100. IMPORTANT: in the Oracle documentation the schema name is in lowercase, as we can see highlighted in yellow.
DECLARE
  ACL_PATH VARCHAR2(4000);
BEGIN
  -- Look for the ACL currently assigned to '*' and give apex_050100
  -- the "connect" privilege if apex_050100 does not have the privilege yet.
  SELECT ACL INTO ACL_PATH FROM DBA_NETWORK_ACLS
   WHERE HOST = '*' AND LOWER_PORT IS NULL AND UPPER_PORT IS NULL;
  IF DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE(ACL_PATH, 'apex_050100', 'connect') IS NULL THEN
    DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(ACL_PATH, 'apex_050100', TRUE, 'connect');
  END IF;
EXCEPTION
  -- When no ACL has been assigned to '*'.
  WHEN NO_DATA_FOUND THEN
    DBMS_NETWORK_ACL_ADMIN.CREATE_ACL('power_users.xml',
      'ACL that lets power users to connect to everywhere',
      'apex_050100', TRUE, 'connect');
    DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL('power_users.xml','*');
END;
/
COMMIT;
If we run this script in SQL*Plus, we get an error that the user does not exist: 01435. 
00000 - "user does not exist" Esto parece ser un bug en la documentación de esta versión, y para solucionarlo lo que necesitamos hacer es colocar el nombre del esquema (usuario) todo en mayúsculas, quedando el script como vemos a continuación: DECLARE ACL_PATH VARCHAR2(4000); BEGIN -- Look for the ACL currently assigned to '*' and give apex_050100 -- the "connect" privilege if apex_050100 does not have the privilege yet. SELECT ACL INTO ACL_PATH FROM DBA_NETWORK_ACLS WHERE HOST = '*' AND LOWER_PORT IS NULL AND UPPER_PORT IS NULL; IF DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE(ACL_PATH, 'APEX_050100' , 'connect') IS NULL THEN DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(ACL_PATH, 'APEX_050100' , TRUE, 'connect'); END IF; EXCEPTION -- When no ACL has been assigned to '*'. WHEN NO_DATA_FOUND THEN DBMS_NETWORK_ACL_ADMIN.CREATE_ACL('power_users.xml', 'ACL that lets power users to connect to everywhere', 'APEX_050100' , TRUE, 'connect'); DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL('power_users.xml','*'); END; / COMMIT; De esta forma el procedimiento se completará con éxito y sin errores. Este bug en la documentación de Oracle ya ha sido reportado por Rich Soule . Ahora es momento de instalar APEX en Español Desde una ventana de comandos configuramos la variable NLS_LANG C:\> set NLS_LANG=American_America.AL32UTF8 Nos dirigimos al directorio: apex/builder/es y abrimos el SQLPlus Ejecutamos el siguiente script para convertir nuestro APEX a Español: SQL> @load_es.sql Una vez finalizado el script, cerramos la ventana CMD y abrimos un navegador web. En la barra de drecciones del navegador, escribimos: http://localhost:8080/apex Se verá la página de inicio de sesión: Ingresamos las credenciales de nuestro Espacio de Trabajo, usuario y contraseña y de este modo ingresamos a la página de inicio de APEX. Ahora podemos ver en el margen inferior derecho la nueva versión que estamos usando: Application Express 5.1.0.00.45 De esta forma muy sencilla hemos actualizado nuestro entorno de desarrollo local. Hasta Pronto!
Blog Post: EMCLI: Add a Database Target with Specific DB System Name
On Oracle Community forum, I’ve seen a good question about using EMCLI to add targets. The forum user says that they decided to name the database targets with combining db_name and hostname. As you may know, when you add a database target, EM also creates a database system for the new target (or assign it with an existing one). The new database system’s name is generated by adding “_sys” to the database target name. Let’s we add a database target named TEST, EM will create a database system as TEST_sys. If we name our database target as “TEST_test.gokhanatil.com”, EM will create a database system named “TEST_test.gokhanatil.com_sys”. In my personal opinion, Enterprise Manager provides enough flexibility to report these information so I wouldn’t use this kind of naming system but that’s not the point. As you see, it works well for stand alone databases but when you add a standby database to this system, it becomes confusing. For example, your TESTSTBY_stby.gokhanatil.com will be part of TEST_test.gokhanatil.com_sys. So the forum user asks if we can give a specific name to the database system instead of default naming. For example, our database system will be named as TEST_sys although its members are TEST_test.gokhanatil.com and TESTSTBY_stby.gokhanatil.com. If you use the web console, it’s possible to set a name for the database system, but as I see, there’s no option or parameter for it on emcli. So my solution is using a “hidden gem” in emcli verbs: rename_target. Let’s say we add a target databases like this: ./emcli add_target -name="TEST_test.gokhanatil.com" \ -type="oracle_database" -host="test.gokhanatil.com" \ -prop="SID:TESTERE;MachineName:test.gokhanatil.com" \ -prop="OracleHome:/u01/OracleHomes/db/product/dbhome_1;Port:1521" \ -credentials="UserName:dbsnmp;password:welcome1;Role:normal" Wait for a while and check if the database system is created: ./emcli get_targets -targets="TEST_test.gokhanatil.com%:%" Status Status Target Type Target Name ID 1 Up oracle_database TEST_test.gokhanatil.com 1 Up oracle_dbsys TEST_test.gokhanatil.com_sys Now let’s rename the new database system and check the result: ./emcli rename_target -target_name="TEST_test.gokhanatil.com_sys" \ -target_type="oracle_dbsys" -new_target_name="TEST_sys" Target TEST_test.gokhanatil.com_sys successfully renamed to TEST_sys. ./emcli get_targets -targets="TEST%:%" Status Status Target Type Target Name ID 1 Up oracle_database TEST_test.gokhanatil.com 1 Up oracle_dbsys TEST_sys When we add the standby database of TEST_test.gokhanatil.com, Enterprise Manager will assign it to TEST_sys system. I hope this method solves the problem. We couldn’t give a specific name to the database system when adding the database target because there’s no (documented) parameter for it. So as a workaround, we renamed it after it’s created. Related Posts How to Download New Agent Software for Oracle Cloud Control 12c in Offline Mode Win A Free Copy of Packt’s Oracle Enterprise Manager 12c Administration Cookbook e-book How to Install Oracle Enterprise Manager Cloud Control 12c Oracle Enterprise Manager Cloud Control 12c Plugin for PostgreSQL How to Download New Agent Software for Oracle Cloud Control 12c
Comment on Rules Engine
Thanks a lot for sharing! In order to get things compiled, I did the following:
- Add some new lines containing a slash only (after the creation of the two types RowData and ResultSet, where I deleted the semicolons in turn, and twice after "END MY_RULESENGINE;")
- Remove "PMTS_PSH_OWNER."
I haven't had an opportunity yet to give the code a try. Btw, I recently implemented a simple (and probably less capable) rule engine in PL/SQL, too.
Blog Post: DBMS_REDEFINITION to create partitions for an existing table
to do this oracle provide you with package called DBMS_REDEFINITION for more information about it her e In this post i will show you how to partition existing table for SIEBEL application which is my case holding huge number of records. Create the same structure for the original table but without any constraint just columns like the below with new name and sure choose the partition you want to use :- CREATE TABLE SIEBEL.S_CASE_NEW ( ROW_ID VARCHAR2(15 CHAR) , CREATED DATE DEFAULT sysdate , CREATED_BY VARCHAR2(15 CHAR) , LAST_UPD DATE DEFAULT sysdate , LAST_UPD_BY VARCHAR2(15 CHAR) , MODIFICATION_NUM NUMBER(10) DEFAULT 0 , CONFLICT_ID VARCHAR2(15 CHAR) DEFAULT '0' , ASGN_USR_EXCLD_FLG CHAR(1 CHAR) DEFAULT 'N' , BU_ID VARCHAR2(15 CHAR) DEFAULT '0-R9NH' , CASE_DT DATE , CASE_NUM VARCHAR2(100 CHAR) , CHGOFCCM_REQ_FLG CHAR(1 CHAR) DEFAULT 'N' , CLASS_CD VARCHAR2(30 CHAR) , LOCAL_SEQ_NUM NUMBER(10) DEFAULT 1 , NAME VARCHAR2(100 CHAR) , PR_REP_DNRM_FLG CHAR(1 CHAR) DEFAULT 'N' , PR_REP_MANL_FLG CHAR(1 CHAR) DEFAULT 'N' , PR_REP_SYS_FLG CHAR(1 CHAR) DEFAULT 'N' , STATUS_CD VARCHAR2(30 CHAR) , ASGN_DT DATE, CLOSED_DT DATE, CURR_APPR_SEQ_NUM NUMBER(10), DB_LAST_UPD DATE, REWARD_AMT NUMBER(22,7), REWARD_EXCH_DATE DATE, APPLICANT_ID VARCHAR2(15 CHAR), APPR_TEMP_ID VARCHAR2(15 CHAR), AUDIT_EMP_ID VARCHAR2(15 CHAR), CATEGORY_TYPE_CD VARCHAR2(30 CHAR), CITY VARCHAR2(50 CHAR), COUNTRY VARCHAR2(30 CHAR), CRIME_SUB_TYPE_CD VARCHAR2(30 CHAR), CRIME_TYPE_CD VARCHAR2(30 CHAR), DB_LAST_UPD_SRC VARCHAR2(50 CHAR), DESC_TEXT VARCHAR2(2000 CHAR), MSTR_CASE_ID VARCHAR2(15 CHAR), ORG_GROUP_ID VARCHAR2(15 CHAR), PAR_CASE_ID VARCHAR2(15 CHAR), PRIORITY_TYPE_CD VARCHAR2(30 CHAR), PR_AGENCY_ID VARCHAR2(15 CHAR), PR_AGENT_ID VARCHAR2(15 CHAR), PR_DISEASE_ID VARCHAR2(15 CHAR), PR_POSTN_ID VARCHAR2(15 CHAR), PR_PROD_INT_ID VARCHAR2(15 CHAR), PR_PRTNR_ID VARCHAR2(15 CHAR), PR_SGROUP_ID VARCHAR2(15 CHAR), PR_SUBJECT_ID VARCHAR2(15 CHAR), PR_SUSPCT_ID VARCHAR2(15 CHAR), PS_APPL_ID VARCHAR2(15 CHAR), REWARD_CURCY_CD VARCHAR2(20 CHAR), SERIAL_NUM VARCHAR2(100 CHAR), SOURCE_CD VARCHAR2(30 CHAR), STAGE_CD VARCHAR2(30 CHAR), STATE VARCHAR2(10 CHAR), SUBJECT_NAME VARCHAR2(100 CHAR), SUBJECT_PH_NUM VARCHAR2(40 CHAR), SUB_STATUS_CD VARCHAR2(30 CHAR), SUB_TYPE_CD VARCHAR2(30 CHAR), TERRITORY_TYPE_CD VARCHAR2(30 CHAR), THREAT_LVL_CD VARCHAR2(30 CHAR), TYPE_CD VARCHAR2(30 CHAR), X_APP_BIRTH_DATE DATE, X_APP_BIRTH_DT_HIJRI VARCHAR2(10 CHAR), X_APP_FATHER_NAME_A VARCHAR2(50 CHAR), X_APP_FATHER_NAME_E VARCHAR2(50 CHAR), X_APP_FAX VARCHAR2(15 CHAR), X_APP_FIRST_NAME_A VARCHAR2(50 CHAR), X_APP_FIRST_NAME_E VARCHAR2(50 CHAR), X_APP_FULL_NAME VARCHAR2(100 CHAR), X_APP_GENDER VARCHAR2(30 CHAR), X_APP_GFATHER_NAME_A VARCHAR2(50 CHAR), X_APP_GFATHER_NAME_E VARCHAR2(50 CHAR), X_APP_LAST_NAME_A VARCHAR2(50 CHAR), X_APP_LAST_NAME_E VARCHAR2(50 CHAR), X_APP_MAIL VARCHAR2(50 CHAR), X_APP_MOBILE VARCHAR2(15 CHAR), X_APP_MOTHER_F_NAME_A VARCHAR2(50 CHAR), X_APP_MOTHER_F_NAME_E VARCHAR2(50 CHAR), X_APP_MOTHER_L_NAME_A VARCHAR2(50 CHAR), X_APP_MOTHER_L_NAME_E VARCHAR2(50 CHAR), X_APP_TYPE VARCHAR2(30 CHAR), X_APPLICANT_CLASSIFICATION VARCHAR2(30 CHAR), X_APPLICANT_NAT_ID_NO VARCHAR2(15 CHAR), X_APPLICANT_TITLE VARCHAR2(30 CHAR), X_APPLICANT_TYPE VARCHAR2(50 CHAR), X_ATTACHMENT_FLG VARCHAR2(5 CHAR), X_CANCEL_DESC VARCHAR2(300 CHAR), X_CANCEL_REASON VARCHAR2(30 CHAR), X_CASE_COPY_FLG VARCHAR2(30 CHAR), X_CASE_HIJRI_DATE VARCHAR2(30 CHAR), X_CHECK_EXISTS_FKG VARCHAR2(15 CHAR), X_CHECK_EXISTS_FLG VARCHAR2(30 CHAR), X_COMMERCIAL_NAME VARCHAR2(300 CHAR), 
X_COMMERCIAL_REG_NO VARCHAR2(40 CHAR), X_COPY_SERIAL_NUM VARCHAR2(100 CHAR), X_CREATED_DATE_HEJRI VARCHAR2(30 CHAR), X_CREATED_GRG VARCHAR2(30 CHAR), X_CREATED_HIJRI VARCHAR2(10 CHAR), X_CRED_EXP_DT_HIJRI VARCHAR2(10 CHAR), X_CRED_EXPIRY_DATE DATE, X_CRED_ISSUE_DATE DATE, X_CRED_ISSUE_DT_HIJRI VARCHAR2(10 CHAR), X_CRED_NO VARCHAR2(30 CHAR), X_CRED_TYPE VARCHAR2(30 CHAR), X_CRS_NO VARCHAR2(15 CHAR), X_DLV_DATE DATE, X_DLV_DATE_HIJRI VARCHAR2(10 CHAR), X_DLV_USER_ID VARCHAR2(15 CHAR), X_DOCUMENT_SORUCE VARCHAR2(30 CHAR), X_EST_OWNERSHIP_TYPE VARCHAR2(30 CHAR), X_EST_TYPE VARCHAR2(30 CHAR), X_GIS_DATA_LOAD VARCHAR2(15 CHAR), X_GIS_DATA_STATUS VARCHAR2(10 CHAR), X_INV_TYPE VARCHAR2(30 CHAR), X_IS_UPLOADED VARCHAR2(30 CHAR), X_LAND_ORG_TYPE VARCHAR2(30 CHAR), X_LAND_STATUS VARCHAR2(30 CHAR), X_LAND_TYPE VARCHAR2(30 CHAR), X_MUNICIPAL_NAME VARCHAR2(30 CHAR), X_NATIONALITY VARCHAR2(30 CHAR), X_ORG_END_REG_DATE DATE, X_ORG_END_REG_HIJRI_DATE VARCHAR2(10 CHAR), X_ORG_REGISTRATION_DATE DATE, X_ORG_REGISTRATION_HIJRI_DATE VARCHAR2(10 CHAR), X_ORGANIZATION_NAME VARCHAR2(200 CHAR), X_ORIGINAL_ORG_ID VARCHAR2(15 CHAR), X_PAPER_FLG CHAR(1 CHAR) DEFAULT 'N', X_PAYMENT_DT DATE, X_PAYMENT_FLAG VARCHAR2(30 CHAR), X_PAYMENT_NO VARCHAR2(10 CHAR), X_PR_EMP_ID VARCHAR2(15 CHAR), X_PR_ENG_OFFICE_ID VARCHAR2(15 CHAR), X_PROC_DESC VARCHAR2(100 CHAR), X_PROC_FLG VARCHAR2(30 CHAR), X_PROC_TYPE VARCHAR2(30 CHAR), X_PROXY_ISSUE_AUTHORITY VARCHAR2(30 CHAR), X_PROXY_NO VARCHAR2(10 CHAR), X_QX_CRED_EXP_DT_HIJRI DATE, X_REGISTRATION_DATE DATE, X_REGISTRATION_HIJRI_DATE VARCHAR2(10 CHAR), X_REGISTRATION_NO VARCHAR2(30 CHAR), X_REJECT_DESC VARCHAR2(300 CHAR), X_REJECT_REASON VARCHAR2(30 CHAR), X_RELATION_TYPE VARCHAR2(30 CHAR), X_RETURN_DATE DATE, X_RETURN_DATE_HIJRI VARCHAR2(10 CHAR), X_RETURN_NOTES VARCHAR2(100 CHAR), X_RETURN_REASON VARCHAR2(30 CHAR), X_SCHEMA_STATUS VARCHAR2(30 CHAR), X_SECURITY_FLG VARCHAR2(30 CHAR), X_SELECT_FLG CHAR(1 CHAR) DEFAULT 'N', X_STRIPPED_FIRST_NAME VARCHAR2(50 CHAR), X_STRIPPED_FULL_NAME VARCHAR2(200 CHAR), X_STRIPPED_LAST_NAME VARCHAR2(50 CHAR), X_STRIPPED_MOTHER_FIRST VARCHAR2(50 CHAR), X_STRIPPED_MOTHER_FULLNAME VARCHAR2(50 CHAR), X_STRIPPED_MOTHER_LASTNAME VARCHAR2(50 CHAR), X_STRIPPED_SECOND_NAME VARCHAR2(50 CHAR), X_STRIPPED_THIRD_NAME VARCHAR2(50 CHAR), X_TO_BU_ID VARCHAR2(15 CHAR), X_UPDATED_GRG VARCHAR2(30 CHAR), X_UPDATED_HIJRI VARCHAR2(10 CHAR), COR_TYPE VARCHAR2(30 CHAR), CORR_CAS_CAT VARCHAR2(30 CHAR), PRIMARY_EMPLOYEE VARCHAR2(30 CHAR), SUBMIT_TO_STATUS VARCHAR2(15 CHAR), X_DOCUMENT_TYPE VARCHAR2(30 CHAR), X_SURVEYOR_NAME VARCHAR2(30 CHAR), X_OLD_STATUS VARCHAR2(30 CHAR), X_APPLICANT_ORG_ID VARCHAR2(15 CHAR), X_APPLICANT_ROW_ID VARCHAR2(15 CHAR), X_GIS_TOKEN VARCHAR2(100 CHAR), X_NEW_PERMIT_FLG CHAR(1 CHAR) DEFAULT 'Y', X_TRANSACTION_MOD VARCHAR2(30 CHAR), X_TRANSACTION_STATUS VARCHAR2(30 CHAR), X_CASE_CAT VARCHAR2(100 CHAR), X_CASE_COPY_SERIAL NUMBER(10), X_GIS_PARAMETER VARCHAR2(300 CHAR), X_GIS_ROWIDS VARCHAR2(100 CHAR), READING_FLAG CHAR(1 CHAR) DEFAULT 'N', X_GIS_MUNICIPAL VARCHAR2(30 CHAR), X_ORG_DELEGATE_NAME VARCHAR2(200 CHAR), X_PR_POS_ORG_ID VARCHAR2(15 CHAR), X_ORGANIZATION_STRIPPED_NAME VARCHAR2(200 CHAR), X_CITIZEN_REVIEW CHAR(1 CHAR), X_ALLOWED_USAGE VARCHAR2(50 CHAR), X_AUTHORIZATION_DATE DATE, X_AUTHORIZATION_HIJRI_DATE VARCHAR2(10 CHAR), X_AUTHORIZATION_NO VARCHAR2(20 CHAR), X_CIVIL_APPROVAL_DATE DATE, X_CIVIL_APPROVAL_HIJRI_DATE VARCHAR2(10 CHAR), X_CIVIL_APPROVAL_NO VARCHAR2(20 CHAR), X_CIVIL_OFFICE VARCHAR2(25 CHAR), X_NEW_MUNICIPAL_NAME VARCHAR2(30 CHAR), 
X_OLD_MUNICIPAL_NAME VARCHAR2(30 CHAR), X_OLD_STATUS_CD VARCHAR2(30 CHAR), X_RESTRICT_NUM VARCHAR2(30 CHAR), X_LAST_UPD_HIJRI VARCHAR2(10 CHAR), X_APP_BIRTH_DATE_HIJRI VARCHAR2(10 CHAR), X_FEES_EXCEPTION VARCHAR2(30 CHAR), X_FINCL_NAME VARCHAR2(30 CHAR), X_IS_OWNER VARCHAR2(15 CHAR), X_MANAGER_ID VARCHAR2(15 CHAR), X_OWNERSHIP_TYPE VARCHAR2(30 CHAR), X_PRINT_FLG CHAR(1 CHAR) DEFAULT 'N', X_PRNT_FLG VARCHAR2(5 CHAR), X_SECRETARY_ID VARCHAR2(15 CHAR), X_UNDER_SECRETARY_ID VARCHAR2(15 CHAR), X_VIEW_SEQUENCE NUMBER(10) DEFAULT 0, X_REGULATION_ACCPTNCE_FLG VARCHAR2(7 CHAR), X_CONFIRM_FLAG VARCHAR2(7 CHAR), X_OCCUPATION_IN_RESIDENCE VARCHAR2(30 CHAR), X_ROWNUM NUMBER(10), X_NUMBER_ARCHIVAL VARCHAR2(15 CHAR), X_ATTACHMENT_PARAMETERS VARCHAR2(500 CHAR), X_ATTACHMENT_ROW_IDS VARCHAR2(500 CHAR), X_BP_ID VARCHAR2(15 CHAR), X_CONTROL_SUB_TYPE VARCHAR2(30 CHAR), X_CONTROL_TYPE VARCHAR2(30 CHAR), X_DEPOSIT_ID VARCHAR2(15 CHAR), X_MALL_ID VARCHAR2(15 CHAR), X_NEW_DEPOSIT NUMBER(10), X_SOURCE_ID VARCHAR2(15 CHAR), X_STORE_ID VARCHAR2(15 CHAR), APPEALED_FLG CHAR(1 CHAR) DEFAULT 'N' , CHANGED_FLG CHAR(1 CHAR) DEFAULT 'N' , EVAL_ASSESS_ID VARCHAR2(15 CHAR), X_ARCHIEVING_TYPE VARCHAR2(30 CHAR), X_ACTIVITY_ID VARCHAR2(15 CHAR), X_OLD_SERIAL_NUM VARCHAR2(20 CHAR), X_OWNER_ORG_POSTN_ID VARCHAR2(15 CHAR), X_PR_CNTR_POSTN_ID VARCHAR2(15 CHAR), X_REPORT_URL VARCHAR2(500 CHAR), X_VIOLATION_ID VARCHAR2(15 CHAR), X_APPLICANT_SOURCE VARCHAR2(50 CHAR) ) PARTITION BY RANGE (created) (PARTITION S_CASE_2015 VALUES LESS THAN (TO_DATE('01/01/2016', 'DD/MM/YYYY')), PARTITION S_CASE_2016 VALUES LESS THAN (TO_DATE('01/01/2017', 'DD/MM/YYYY')), PARTITION S_CASE_2017 VALUES LESS THAN (MAXVALUE)); Now we should start the redefinition by running the following package BEGIN DBMS_REDEFINITION.start_redef_table( uname => 'SIEBEL', orig_table => 'S_CASE', int_table => 'S_CASE_NEW'); END; / Sync the both tables together BEGIN dbms_redefinition.sync_interim_table( uname => 'SIEBEL', orig_table => 'S_CASE', int_table => 'S_CASE_NEW'); END; / After Running the both package above run the below scripts but run it from the server side, because it's takes times and to avoid any interruption SET SERVEROUTPUT ON DECLARE l_errors NUMBER; BEGIN DBMS_REDEFINITION.copy_table_dependents( uname => 'SIEBEL', orig_table => 'S_CASE', int_table => 'S_CASE_NEW', copy_indexes => DBMS_REDEFINITION.cons_orig_params, copy_triggers => TRUE, copy_constraints => TRUE, copy_privileges => TRUE, ignore_errors => FALSE, num_errors => l_errors, copy_statistics => FALSE, copy_mvlog => FALSE); DBMS_OUTPUT.put_line('Errors=' || l_errors); END; / Finish the redefinition BEGIN dbms_redefinition.finish_redef_table( uname => 'SIEBEL', orig_table => 'S_CASE', int_table => 'S_CASE_NEW'); END; / After finishing everything successfully, just drop the new table because now it's became the old table DROP TABLE S_CASE_NEW; Run the below query to see if the partition has been successfully created SELECT partitioned FROM dba_tables WHERE table_name = 'S_CASE'; Thanks Osama
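One extra safety step worth mentioning: before creating the interim table and calling START_REDEF_TABLE, you can ask DBMS_REDEFINITION whether the table can be redefined online at all. A minimal sketch, assuming the same SIEBEL.S_CASE table; switch the options flag to DBMS_REDEFINITION.CONS_USE_ROWID if the table has no usable primary key:

SET SERVEROUTPUT ON
BEGIN
  -- Raises an error if SIEBEL.S_CASE cannot be redefined online.
  DBMS_REDEFINITION.can_redef_table(
    uname        => 'SIEBEL',
    tname        => 'S_CASE',
    options_flag => DBMS_REDEFINITION.cons_use_pk);
  DBMS_OUTPUT.put_line('S_CASE can be redefined online.');
END;
/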
Comment on Rules Engine
There are some more glitches in the test code as well as in the engine itself. Here is a corrected version of the test code, making the engine start and actually find some rules. However, the engine exits very early with an exception which I haven't debugged yet. /** DMLs ***/ INSERT INTO MY_LOG_PROPERTIES (LOGGER, LOGLEVEL, CREATEDBY, CREATEDDT, UPDATEDDT, UPDATEDBY) VALUES ('*', 'ON', 'Agilan', SYSDATE, SYSDATE, 'Agilan'); INSERT INTO MY_LOG_PROPERTIES (LOGGER, LOGLEVEL, CREATEDBY, CREATEDDT, UPDATEDDT, UPDATEDBY) VALUES ('MY_RULESENGINE', 'DEB', 'Agilan', SYSDATE, SYSDATE, 'Agilan'); --------------- INSERT INTO MY_RULE_EXE_CONFIG (ID, APPID, RULESET_ID, BREAK_CONDN_BEF_AFT, BREAKING_RULEID, UPDDT) VALUES (MY_exe_rule_config_seq.NEXTVAL, 'YourApplicationID', 'YourRuleSetId', 'AFT', 'breakOnFailure', SYSDATE); INSERT INTO MY_RULE_BINDVARIABLES (APPID, VARIABLENAME, BIND_QUERY, INC_TYPE, CACHE_WITHIN_EXEC, UPDDT) VALUES ( 'YourApplicationID', '$v.input_param1_from_caller$', 'SELECT value FROM MY_RE_INPUT_PARAM WHERE EXEID=$p.pExecId$ AND upper(KEY)=''PARAMETERNAME_1''', 'EQ', 'N', SYSDATE); INSERT INTO MY_RULE_BINDVARIABLES (APPID, VARIABLENAME, BIND_QUERY, INC_TYPE, CACHE_WITHIN_EXEC, UPDDT) VALUES ( 'YourApplicationID', '$v.input_param2_list_from_caller$', 'SELECT value FROM MY_RE_INPUT_PARAM WHERE EXEID=$p.pExecId$ AND upper(KEY)=''PARAMETERNAME_2_LIST''', 'IN', 'N', SYSDATE); INSERT INTO MY_RULE_BINDVARIABLES (APPID, VARIABLENAME, BIND_QUERY, INC_TYPE, CACHE_WITHIN_EXEC, UPDDT) VALUES ( 'YourApplicationID', '$v.current_date$', 'SELECT sysdate from dual', 'EQ', 'Y', SYSDATE); INSERT INTO MY_RULE_BINDVARIABLES (APPID, VARIABLENAME, BIND_QUERY, INC_TYPE, CACHE_WITHIN_EXEC, UPDDT) VALUES ( 'YourApplicationID', '$v.somevariable1$', 'SELECT ''data'' from dual', 'EQ', 'N', SYSDATE); INSERT INTO MY_RULE (ID, APPID, RULE_ID, RULESET_ID, RULE_QUERY, UPDDT) VALUES (MY_rule_seq.NEXTVAL, 'YourApplicationID', 'breakOnFailure', 'YourRuleSetId', 'select ''1'' neverbreak from dual where 1=2 ', SYSDATE); INSERT INTO MY_RULE (ID, APPID, RULE_ID, RULESET_ID, RULE_QUERY, UPDDT) VALUES ( MY_rule_seq.NEXTVAL, 'YourApplicationID', 'MyRuleQuery1', 'YourRuleSetId', ' select col1,col2,col3 from some_table where some_col = $v.somevariable1$', SYSDATE); INSERT INTO MY_RULE (ID, APPID, RULE_ID, RULESET_ID, RULE_QUERY, UPDDT) VALUES ( MY_rule_seq.NEXTVAL, 'YourApplicationID', 'MyRuleQuery2', 'YourRuleSetId', ' select col1,col2,col3 from some_other_table where some_date_col = $v.current_date$ ', SYSDATE); INSERT INTO MY_RULE (ID, APPID, RULE_ID, RULESET_ID, RULE_QUERY, UPDDT) VALUES ( MY_rule_seq.NEXTVAL, 'YourApplicationID', 'MyRuleQuery3', 'YourRuleSetId', ' select col1,col2,col3 from some_other_table where some_col = $v.input_param1_from_caller$ ', SYSDATE); INSERT INTO MY_RULE (ID, APPID, RULE_ID, RULESET_ID, RULE_QUERY, UPDDT) VALUES ( MY_rule_seq.NEXTVAL, 'YourApplicationID', 'MyRuleQuery4', 'YourRuleSetId', ' select col1,col2,col3 from some_other_table where some_col in $v.input_param2_list_from_caller$ ', SYSDATE); ----------------------- /** Testing the proc **/ CREATE OR REPLACE PROCEDURE prc_test AS begin /** Insert input_param and execute the proc in the same session **/ insert into MY_RE_INPUT_PARAM (exeid,key,value) values ( 1, 'PARAMETERNAME_1','apple'); insert into MY_RE_INPUT_PARAM (exeid,key,value) values ( 1, 'PARAMETERNAME_2_LIST','apple'); insert into MY_RE_INPUT_PARAM (exeid,key,value) values ( 1, 'PARAMETERNAME_2_LIST','mango'); insert into MY_RE_INPUT_PARAM (exeid,key,value) values ( 1, 
'PARAMETERNAME_2_LIST','banana'); insert into MY_RE_INPUT_PARAM (exeid,key,value) values ( 1, 'PARAMETERNAME_2_LIST','avacado'); MY_RULESENGINE.pub_fireRules('YourApplicationID','YourRuleSetId',1); end; /
Blog Post: onecommand fails to change storage cell name
It’s been a busy month – five Exadata deployments in the past three weeks and a new personal best – 2x Exadata X6-2 Eighth Racks with CoD and storage upgrade deployed in only 6hrs! An issue I encountered with the first deployment was that onecommand wouldn’t change the storage cell names. The default cell names (not hostnames!) are based on where they are mounted within the rack and they are assigned by the elastic configuration script. The first cell name is ru02 (rack unit 02), the second cell is ru04, the third is ru06 and so on. Now, if you are familiar with the cell and grid disks you would know that their names are based on the cell name. In other words, I got my cell, grid and ASM disks with the wrong names. Exachk would report the following failures for every grid disk:
Grid Disk name DATA01_CD_00_ru02 does not have cell name (exa01cel01) suffix
Naming convention not used. Cannot proceed further with automating checks and repair for bug 12433293
Apart from exachk complaining, I wouldn’t feel comfortable with names like these on my Exadata. Fortunately cell, grid and ASM disk names can be changed, and here is how to do it:
Stop the cluster and CRS on each compute node:
/u01/app/12.1.0.2/grid/bin/crsctl stop cluster -all
/u01/app/12.1.0.2/grid/bin/crsctl stop crs
Log in to each storage server and rename the cell, the cell disks and the grid disks; use the following to build the alter commands. You don’t need cell services shut down, but the grid disks shouldn’t be in use, i.e. make sure to stop the cluster first!
cellcli -e alter cell name=exa01cel01
for i in `cellcli -e list celldisk | awk '{print $1}'`; do echo "cellcli -e alter celldisk $i name=$i"; done | sed -e "s/ru02/exa01cel01/2"
for i in `cellcli -e list griddisk | awk '{print $1}'`; do echo "cellcli -e alter griddisk $i name=$i"; done | sed -e "s/ru02/exa01cel01/2"
If you get the following error, restart the cell services and try again:
GridDisk DATA01_CD_00_ru02 alter failed for reason: CELL-02548: Grid disk is in use.
Start the cluster on each compute node:
/u01/app/12.1.0.2/grid/bin/crsctl start crs
We’ve got all cell and grid disks fixed; now we need to rename the ASM disks. To rename ASM disks you need to mount the diskgroup in restricted mode, i.e. running on one node only and with no one using it. If the diskgroup is not in restricted mode you’ll get:
ORA-31020: The operation is not allowed, Reason: disk group is NOT mounted in RESTRICTED state.
Stop the second compute node, the default dbm01 database and the MGMTDB database:
srvctl stop database -d dbm01
srvctl stop mgmtdb
Mount the diskgroups in restricted mode. If you are running 12.1.2.3.0+ with a high redundancy DATA diskgroup, it is VERY likely that the voting disks are in the DATA diskgroup. Because of that, you wouldn’t be able to dismount the diskgroup. 
The only way I found around that was to force stop ASM and start it manually in restricted mode:

srvctl stop asm -n exa01db01 -f

sqlplus / as sysasm
startup mount restrict
alter diskgroup all dismount;
alter diskgroup data01 mount restricted;
alter diskgroup reco01 mount restricted;
alter diskgroup dbfs_dg mount restricted;

Rename the ASM disks; use the following to build the alter commands:

select 'alter diskgroup ' || g.name || ' rename disk ''' || d.name || ''' to ''' || REPLACE(d.name,'RU02','exa01cel01') || ''';' from v$asm_disk d, v$asm_diskgroup g where d.group_number=g.group_number and d.name like '%RU02%';
select 'alter diskgroup ' || g.name || ' rename disk ''' || d.name || ''' to ''' || REPLACE(d.name,'RU04','exa01cel02') || ''';' from v$asm_disk d, v$asm_diskgroup g where d.group_number=g.group_number and d.name like '%RU04%';
select 'alter diskgroup ' || g.name || ' rename disk ''' || d.name || ''' to ''' || REPLACE(d.name,'RU06','exa01cel03') || ''';' from v$asm_disk d, v$asm_diskgroup g where d.group_number=g.group_number and d.name like '%RU06%';

Finally, stop and start CRS on both nodes. It was only when I thought everything was OK that I discovered one more reference to those pesky names: the fail group names, which again are based on the storage cell name. The following makes it clearer:

select group_number, failgroup, mode_status, count(*) from v$asm_disk where group_number > 0 group by group_number, failgroup, mode_status;

GROUP_NUMBER FAILGROUP  MODE_ST  COUNT(*)
------------ ---------- ------- ---------
           1 RU02       ONLINE        12
           1 RU04       ONLINE        12
           1 RU06       ONLINE        12
           1 EXA01DB01  ONLINE         1
           1 EXA01DB02  ONLINE         1
           2 RU02       ONLINE        10
           2 RU04       ONLINE        10
           2 RU06       ONLINE        10
           3 RU02       ONLINE        12
           3 RU04       ONLINE        12
           3 RU06       ONLINE        12

For each diskgroup we've got three fail groups (three storage cells). The other two fail groups, EXA01DB01 and EXA01DB02, are the quorum disks. Unfortunately, you cannot rename failgroups in ASM. My immediate thought was to drop each failgroup and add it back, hoping that would resolve the problem. Unfortunately, since this was a quarter rack I couldn't do it; here's an excerpt from the documentation:

If a disk group is configured as high redundancy, then you can do this procedure on a Half Rack or greater. You will not be able to do this procedure on a Quarter Rack or smaller with high redundancy disk groups because ASM will not allow you to drop a failure group such that only one copy of the data remains (you'll get an ORA-15067 error).

The last option was to recreate the diskgroups. I've done this many times before, when the compatible.rdbms parameter was set too high and I had to install some earlier version of 11.2. However, since Oracle decided to move the voting disks to DATA this became a bit harder. I couldn't drop DBFS_DG because that's where the MGMTDB was created, and I couldn't drop DATA01 either because of the voting disks and some parameter files. I could have recreated the RECO01 diskgroup but decided to keep it "consistently wrong" across all three diskgroups. Fortunately, this behaviour might change with the January 2017 release of OEDA. The following bug fix suggests that DBFS_DG will always be configured as high redundancy and host the voting disks:

24329542: oeda should make dbfs_dg as high redundancy and locate ocr/vote into dbfs_dg

There is also a feature request to support failgroup rename, but it's not very popular, to be honest.
Until we get this feature, exachk will report the following failure:

failgroup name (RU02) for grid disk DATA01_CD_00_exa01cel01 is not cell name
Naming convention not used. Cannot proceed further with automating checks and repair for bug 12433293

I've deployed five Exadata X6-2 machines so far and have had this issue on all of them. It seems to be caused by a bug in OEDA. The storage cell names should have been changed as part of the "Create Cell Disks" step of onecommand. I keep the logs from some older deployments, where it's very clear that each cell was renamed as part of this step:

Initializing cells...
EXEC## |cellcli -e alter cell name = exa01cel01|exa01cel01.local.net|root|

I couldn't find that command in the logs of the deployments I did. Obviously, the solution for now is to manually rename the cells before you run the "Create Cell Disks" step of onecommand.
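A quick way to double-check that nothing was missed is to list the names again on each storage server and confirm the old rack-unit names are gone. A small sketch, assuming the first cell (exa01cel01) and the old ru02-style names:

cellcli -e list cell attributes name
cellcli -e list celldisk attributes name | grep -i ru0
cellcli -e list griddisk attributes name | grep -i ru0

From ASM, re-running the failgroup query shown above confirms whether any RU* references remain.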
↧
Blog Post: Oracle Data Miner 4.2 New Features
Oracle Data Miner 4.2 (part of SQL Developer 4.2) was released as an Early Adopter (EA) version a few weeks ago. I had an earlier blog post that looked at the new Oracle Advanced Analytics in-database features in the Oracle 12.2 Database. With the new/updated Oracle Data Miner (ODMr) there are a number of new features. These can be categorized as: 1) features all ODMr users can use now, 2) new features that are only usable with an Oracle 12.2 Database, and 3) updates to existing algorithms that have been exposed via the ODMr tool. The following is a round-up of the main new features you can enjoy as part of ODMr 4.2 (mainly covering points 1 and 2 above):
- You can now schedule workflows to run based on a defined schedule
- Support for additional data types (RAW, ROWID, UROWID, URITYPE)
- Better support for processing JSON data in the JSON Query node
- Additional insights are displayed as part of the Model Details View
- Additional alert monitoring and reporting
- Better support for processing in-memory data
- A new R Model node that allows you to include in-database ORE user-defined R functions to support model build, model testing and applying the model
- New Explicit Semantic Analysis node (Explicit Feature Extraction)
- New Feature Compare and Test nodes
- New workflow status profiling and performance improvements
- Refresh the input data definition in nodes
- The Attribute Filter node now allows for unsupervised attribute importance ranking
- The ability to build Partitioned data mining models
Look out for blog posts on most of these new features over the coming months. WARNING: Most of these new features require an Oracle 12.2 Database.
↧
↧
Blog Post: Confused by your error backtrace? Check the optimization level!
The DBMS_UTILITY.FORMAT_ERROR_BACKTRACE function (and similar functionality in the UTL_CALL_STACK package) is tremendously helpful. It returns a formatted string that allows you to easily trace back to the line number on which an exception was raised. You know what else is really helpful? The automatic optimization performed by the PL/SQL compiler. The default level is 2, which does an awful lot of optimizing for you. But if you want to get the most out of the optimizer, you can ratchet it up to level 3, which then adds subprogram inlining. Unfortunately, these two wonderful features don't mix all that well. Specifically, if you optimize at level 3, then the backtrace may not point all the way back to the line number in your "original" source code (without inlining, of course). Run this LiveSQL script to see the code below "in action."

ALTER SESSION SET plsql_optimize_level = 2
/

CREATE OR REPLACE PROCEDURE proc1
IS
   l_level   INTEGER;

   PROCEDURE inline_proc1
   IS
   BEGIN
      RAISE PROGRAM_ERROR;
   EXCEPTION
      WHEN OTHERS
      THEN
         DBMS_OUTPUT.put_line ('inline_proc1 handler');
         DBMS_OUTPUT.put_line (
            DBMS_UTILITY.format_error_backtrace);
         RAISE;
   END;
BEGIN
   SELECT plsql_optimize_level
     INTO l_level
     FROM user_plsql_object_settings
    WHERE name = 'PROC1';

   DBMS_OUTPUT.put_line ('Opt level = ' || l_level);

   inline_proc1;
EXCEPTION
   WHEN OTHERS
   THEN
      DBMS_OUTPUT.put_line ('inline handler');
      DBMS_OUTPUT.put_line (DBMS_UTILITY.format_error_backtrace);
      RAISE;
END;
/

BEGIN
   proc1;
END;
/

ALTER SESSION SET plsql_optimize_level = 3
/

ALTER PROCEDURE proc1 COMPILE
/

BEGIN
   proc1;
END;
/

The output:

Opt level = 2
inline_proc1 handler
ORA-06512: at "STEVEN.PROC1", line 8
inline handler
ORA-06512: at "STEVEN.PROC1", line 15
ORA-06512: at "STEVEN.PROC1", line 8
ORA-06512: at "STEVEN.PROC1", line 25

Opt level = 3
inline_proc1 handler
ORA-06512: at "STEVEN.PROC1", line 25
inline handler
ORA-06512: at "STEVEN.PROC1", line 25
ORA-06512: at "STEVEN.PROC1", line 25

I hope to have an update from the PL/SQL dev team on this topic soon, but I wanted to make you aware of this in case you get all confused and frustrated. Check your optimization level! Oh, how do you do that? Here you go:

SELECT p.plsql_optimize_level
  FROM user_plsql_object_settings p
 WHERE name = 'PROC1'
/
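If you want the optimizer at level 3 but still need the backtrace to point at the original line numbers for a particular call, one workaround (a sketch, not from the original post; the procedure names are illustrative) is to suppress inlining of that specific call with the INLINE pragma:

ALTER SESSION SET plsql_optimize_level = 3
/

CREATE OR REPLACE PROCEDURE proc2
IS
   PROCEDURE inner_proc
   IS
   BEGIN
      RAISE PROGRAM_ERROR;
   END;
BEGIN
   -- Ask the compiler not to inline the very next call, even at level 3
   PRAGMA INLINE (inner_proc, 'NO');
   inner_proc;
END;
/

BEGIN
   proc2;
EXCEPTION
   WHEN OTHERS
   THEN
      DBMS_OUTPUT.put_line (DBMS_UTILITY.format_error_backtrace);
END;
/

With the call not inlined, the backtrace should again report the line of the RAISE and the line of the invocation, rather than collapsing everything onto a single line.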
↧
Comment on Cómo Imprimir Informes Personalizados en formato PDF desde Código PL/SQL en Oracle Apex 5
Hi Emanuel, the package is set up so that the output is in PDF. On the other hand, using the Jasper Report Integration you can use those other formats, since they can be produced by the kit developed by Dietmar Aust; if I remember correctly you can use the formats pdf, rtf, docx, xls, xlsx, csv, pptx, html and htm2. I haven't tried all of the formats, so it's a matter of investigating along those lines. Regards
↧
Blog Post: Players for Logic Annual Championship for 2016
The following players will be invited to participate in the Logic Annual Championship for 2016, currently scheduled to take place on 4 April. The number in parentheses after each name is the number of championships in which that player has already participated. Congratulations to all listed below on their accomplishment and best of luck in the upcoming competition!

Name | Rank | Qualification | Country
Pavel Zeman (2) | 1 | Top 50 | Czech Republic
SteliosVlasopoulos (3) | 2 | Top 50 | Belgium
Marek Sobierajski (1) | 3 | Top 50 | Poland
mentzel.iudith (3) | 4 | Top 50 | Israel
Vyacheslav Stepanov (3) | 5 | Top 50 | No Country Set
James Su (3) | 6 | Top 50 | Canada
Rytis Budreika (3) | 7 | Top 50 | Lithuania
JasonC (3) | 8 | Top 50 | United Kingdom
Cor (2) | 9 | Top 50 | Netherlands
Köteles Zsolt (2) | 10 | Top 50 | Hungary
Kuvardin Evgeniy (2) | 11 | Top 50 | Russia
NickL (2) | 12 | Top 50 | United Kingdom
Chad Lee (3) | 13 | Top 50 | United States
NeilC (0) | 14 | Top 50 | United Kingdom
TZ (1) | 15 | Top 50 | Lithuania
D. Kiser (2) | 16 | Top 50 | United States
ted (3) | 17 | Top 50 | United Kingdom
MarkM. (3) | 18 | Top 50 | Germany
Elic (3) | 19 | Top 50 | Belarus
mcelaya (1) | 20 | Top 50 | Spain
Sandra99 (3) | 21 | Top 50 | Italy
tonyC (2) | 22 | Top 50 | United Kingdom
seanm95 (3) | 23 | Top 50 | United States
Talebian (2) | 24 | Top 50 | Netherlands
richdellheim (3) | 25 | Top 50 | United States
Arūnas Antanaitis (1) | 26 | Top 50 | Lithuania
ratte2k4 (1) | 27 | Top 50 | Germany
umir (3) | 28 | Top 50 | Italy
Kanellos (2) | 29 | Top 50 | Greece
NielsHecker (3) | 30 | Top 50 | Germany
Andrii Dorofeiev (2) | 31 | Top 50 | Ukraine
Mehrab (3) | 32 | Top 50 | United Kingdom
JustinCave (3) | 33 | Top 50 | United States
krzysioh (2) | 34 | Top 50 | Poland
Stanislovas (0) | 35 | Top 50 | Lithuania
Vladimir13 (1) | 36 | Top 50 | Russia
danad (3) | 37 | Top 50 | Czech Republic
RalfK (2) | 38 | Top 50 | Germany
YuanT (3) | 39 | Top 50 | United States
Mike Tessier (1) | 40 | Top 50 | Canada
Vijay Mahawar (3) | 41 | Top 50 | No Country Set
Eric Levin (2) | 42 | Top 50 | United States
whab@tele2.at (1) | 43 | Top 50 | Austria
puzzle1fun (0) | 44 | Top 50 | No Country Set
Sartograph (1) | 45 | Top 50 | Germany
tonywinn (1) | 46 | Top 50 | Australia
dovile (0) | 47 | Top 50 | Lithuania
Jeff Stephenson (0) | 48 | Top 50 | No Country Set
craig.mcfarlane (2) | 49 | Top 50 | Norway
Paresh Patel (0) | 50 | Top 50 | No Country Set
↧
Blog Post: Formatting results from ORE script in a SELECT statement
This blog post looks at how to format the output, or returned results, from an Oracle R Enterprise (ORE) user-defined R function that is run using a SELECT statement in SQL. Sometimes this can be a bit of a challenge to work out, but it can be relatively easy once you have figured out how to do it. The following examples work through some scenarios of different result sets from a user-defined R function that is stored in the Oracle Database. To run that user-defined R function using a SELECT statement I can use one of the following ORE SQL functions:

rqEval
rqTableEval
rqGroupEval
rqRowEval

For simplicity we will just use the first of these ORE SQL functions to illustrate the problem and how to go about solving it. The rqEval ORE SQL function is a general purpose function to call a user defined R script stored in the database. The function does not require any input data set, but it will return some data. You could use this to generate some dummy/test data or to find some information in the database. Here is a noddy example that returns my name (the data frame contents below are a reconstructed placeholder).

BEGIN
   --sys.rqScriptDrop('GET_NAME');
   sys.rqScriptCreate('GET_NAME',
      'function() {
          res <- data.frame(NAME="My Name")
          res
       }');
END;
/

To call this user defined R function I can use the following SQL.

select *
  from table(rqEval(null,
                    'select cast(''a'' as varchar2(50)) from dual',
                    'GET_NAME'));

For text strings returned, you need to cast the returned value, giving it a size. If we have a numeric value being returned, we don't have to use the cast and can instead use '1', as shown in the following example. This second example extends our user defined R function to return my name and a number.

BEGIN
   sys.rqScriptDrop('GET_NAME');
   sys.rqScriptCreate('GET_NAME',
      'function() {
          res <- data.frame(NAME="My Name", YEAR=2017)
          res
       }');
END;
/

To call the updated GET_NAME function we now have to process two returned columns. The first is a character string and the second is a numeric.

select *
  from table(rqEval(null,
                    'select cast(''a'' as varchar2(50)) as "NAME", 1 AS YEAR from dual',
                    'GET_NAME'));

These examples illustrate how you can process character strings and numerics being returned by the user defined R script. The key to setting up the format of the returned values is knowing the structure of the data frame being returned by the user defined R script. Once you know that, the rest is (in theory) easy.
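The other ORE SQL functions follow the same idea: the SELECT string you pass describes the names and data types of the columns coming back. As a rough sketch (the table, columns and script name below are hypothetical, not from this post), rqTableEval additionally takes a cursor for the input data set and a cursor for optional parameters:

select *
  from table(rqTableEval(
         cursor(select cust_id, age, salary from my_customers),
         cursor(select 1 as "ore.connect" from dual),
         'select cast(''a'' as varchar2(50)) as "SEGMENT", 1 as SCORE from dual',
         'MY_SCORE_SCRIPT'));

The third argument plays the same role as the SELECT string used with rqEval above: it defines the format of the columns returned by the user defined R function.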
↧
↧
Blog Post: DevOps for Docker - A Trend for 2017
Docker container has revolutionized the use of software with its modularity, platform independence, efficient use of resources, and fast installation time. Docker lends itself to the DevOps approach to software development. This article looks at the technology behind the trend: DevOps for Docker. The benefits of DevOps include:
- Reduction in time-to-market
- Application updates in real-time
- Reliable releases
- Improved product quality
- Improved Agile environment

What is DevOps?

Agile software development is based around adaptive software development, continuous improvement and early delivery. But one of the fallouts of Agile has been an increase in the number of releases, to the point where they can't always be implemented reliably and quickly. In response to this situation, the objective of DevOps is to establish organizational collaboration between the various teams involved in software delivery and to automate the process of software delivery so that a new release can be tested, deployed and monitored continuously. DevOps is derived from combining Development and Operations. It seeks to automate processes and involves collaboration between software developers, operations and quality assurance teams. Using a DevOps approach with Docker can create a continuous software delivery pipeline from GitHub code to application deployment.

How Can DevOps Be Used With Docker?

A Docker container is run by using a Docker image, which may be available locally or on a repository such as the Docker Hub. Suppose, as a use case, that MySQL database or some other database makes new releases available frequently with minor bug fixes or patches. How do you make the new release available for end users without a time lag? Docker images are associated with code repositories: a GitHub code repo or some other repository such as AWS CodeCommit. If a developer were to build a Docker image from the GitHub code repo and make it available on Docker Hub to end users, and if the end users were to deploy the Docker image as Docker containers, it would involve several individually run phases (a command-line sketch of these steps appears below):

1. Build the GitHub code into a Docker image (with the docker build command).
2. Test the Docker image (with the docker run command).
3. Upload the Docker image to Docker Hub (with the docker push command).
4. End user downloads the Docker image (with the docker pull command).
5. End user runs a Docker container (with the docker run command).
6. End user deploys an application (using AWS Elastic Beanstalk, for example).
7. End user monitors the application.

The GitHub to Deployment pipeline is shown in the following illustration. As a new MySQL database release becomes available with new bug fixes over a short duration (which could be a day), the complete process would need to be repeated. But a DevOps approach could be used to make the Docker image pipeline from GitHub to Deployment continuous and not require user or administrator intervention.

DevOps Design Patterns

"A software design pattern is a general reusable solution to a commonly occurring problem" - Wikipedia. The DevOps design patterns are centered on continuous code integration, continuous software testing, continuous software delivery, and continuous deployment. Automation, collaboration, and continuous monitoring are some of the other design patterns used in DevOps.

Continuous Code Integration

Continuous code integration is the process of continuously integrating source code into a new build or release. The source code could be in a local repository or on GitHub or AWS CodeCommit.
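For reference, the manual (non-DevOps) pipeline in steps 1 to 7 above might look roughly like the commands below. This is only a sketch: the repository URL, image name and tag are hypothetical placeholders.

# Developer side: build, smoke-test and publish the image
git clone https://github.com/example/mysql-image.git && cd mysql-image
docker build -t exampleuser/mysql-app:1.0 .
docker run --rm exampleuser/mysql-app:1.0 mysql --version
docker push exampleuser/mysql-app:1.0

# End-user side: pull and run the published image
docker pull exampleuser/mysql-app:1.0
docker run -d --name mysql-app -e MYSQL_ROOT_PASSWORD=mysql exampleuser/mysql-app:1.0

Every new upstream release means repeating all of these steps by hand, which is exactly what the DevOps patterns below automate.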
Continuous Software Testing

Continuous software testing is continuous testing of a new build or release. Tools such as Jenkins provide several features for testing; for instance, user input at each phase of a Jenkins Pipeline. Jenkins provides plugins such as the Docker Build Step Plugin for testing each Docker application phase separately: running a Docker container, uploading a Docker image and stopping a Docker container.

Continuous Software Delivery

Continuous software delivery is making new software builds available to end users for deployment in production. For a Docker application, continuous delivery involves making each new version/tag of a Docker image available on a repository such as Docker Hub or an Amazon EC2 Container Registry.

Continuous Software Deployment

Continuous software deployment involves deploying the latest release of a Docker image continuously, such that each time a new version/tag of a Docker image becomes available the Docker image gets deployed in production. The Kubernetes Docker container manager already provides features such as rolling updates to update a Docker image to the latest version without interruption of service. Using Jenkins, rolling updates may be automated, such that each time a new version/tag of a Docker image becomes available the deployed Docker image is updated continuously.

Continuous Monitoring

Continuous monitoring is the process of monitoring a running application. Tools such as Sematext can be used to monitor a Docker application. A Sematext Docker Agent can be deployed to monitor Kubernetes cluster metrics and collect logs.

Automation

One type of automation that can be applied to Docker applications is to automate the installation of tools such as Kubernetes, whose installation is quite involved. Kubernetes 1.4 includes a new tool called "kubeadm" to automate the installation of Kubernetes on Ubuntu and CentOS. The kubeadm tool is not supported on CoreOS.

Collaboration

Collaboration involves cross-team work and sharing of resources. As an example, different development teams could be developing code for different versions (tags) of a Docker image on a GitHub repository, and all the Docker image tags would be built and uploaded to Docker Hub simultaneously and continuously. Jenkins provides the Multibranch Pipeline project to build code from multiple branches of a repository such as a GitHub repository. Another example of collaboration involves Helm charts, pre-configured Kubernetes resources that can be used directly. For example, pre-configured Kubernetes resources for MySQL database are available as Helm charts in the kubernetes/charts and helm/charts repositories. Helm charts eliminate the need to develop complex applications or perform complex upgrades to software that has already been developed.

What are some of the DevOps Tools?

Jenkins

Jenkins is a commonly used automation and continuous delivery tool that can be used to build, test and deliver Docker images continuously. Jenkins provides several plugins that can be used with Docker, including the Docker Plugin, Docker Build Step Plugin, and Amazon EC2 Plugin. With the Amazon EC2 Plugin, a cloud configuration can be used to provision Amazon EC2 instances dynamically for Jenkins Slave agents. The Docker Plugin can be used to configure a cloud to run a Jenkins project in a Docker container. The Docker Build Step Plugin is used to test a Docker image for individual phases, such as to build a Docker image, run a container, push a Docker image to Docker Hub, and stop and remove a Docker container.
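As a rough sketch of how such a Jenkins job could be wired together, the declarative Jenkinsfile below runs the build, test and push phases with the Docker CLI. The image name is a hypothetical placeholder, and it assumes Docker and registry credentials are already available on the build agent.

pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'docker build -t exampleuser/mysql-app:$BUILD_NUMBER .' }
        }
        stage('Test') {
            steps { sh 'docker run --rm exampleuser/mysql-app:$BUILD_NUMBER mysql --version' }
        }
        stage('Push') {
            steps { sh 'docker push exampleuser/mysql-app:$BUILD_NUMBER' }
        }
    }
}

The same three phases map directly onto the Docker Build Step Plugin actions described above.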
Jenkins itself may be run as a Docker container using the Docker image "jenkins".

AWS CodeCommit and AWS CodeBuild

Amazon AWS provides several tools for DevOps. AWS CodeCommit is a version control service, similar to GitHub, to store and manage source code files. AWS CodeBuild is a DevOps tool to build and test code. The code to be built can be integrated continuously from GitHub or CodeCommit, and the output Docker image from CodeBuild can be uploaded to Docker Hub or to the Amazon EC2 Container Registry as a build completes. What CodeBuild provides is an automated and continuous process for the Build, Test, Package and Deliver phases shown in the pipeline flow diagram in this article. Alternatively, CodeBuild output could be stored in an Amazon S3 bucket.

AWS Elastic Beanstalk

AWS Elastic Beanstalk is another AWS DevOps tool, used for deploying and scaling Docker applications on the cloud. AWS Beanstalk provides automatic capacity provisioning, load balancing, scaling and monitoring for Docker application deployments. A Beanstalk application and environment can be created from a Dockerfile packaged as a zip file that includes the other application resources, or just from an unpackaged Dockerfile if no other resource files are included. Alternatively, the configuration for a Docker application, including the Docker image and environment variables, could be specified in a Dockerrun.aws.json file. An example Dockerrun.aws.json file is listed below, in which multiple containers are configured: one for the MySQL database and the other for the nginx server.

{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "mysql-app",
      "host": { "sourcePath": "/var/app/current/mysql-app" }
    },
    {
      "name": "nginx-proxy-conf",
      "host": { "sourcePath": "/var/app/current/proxy/conf.d" }
    }
  ],
  "containerDefinitions": [
    {
      "name": "mysql-app",
      "image": "mysql",
      "environment": [
        { "name": "MYSQL_ROOT_PASSWORD", "value": "mysql" },
        { "name": "MYSQL_ALLOW_EMPTY_PASSWORD", "value": "yes" },
        { "name": "MYSQL_DATABASE", "value": "mysqldb" },
        { "name": "MYSQL_PASSWORD", "value": "mysql" }
      ],
      "essential": true,
      "memory": 128,
      "mountPoints": [
        { "sourceVolume": "mysql-app", "containerPath": "/var/mysql", "readOnly": true }
      ]
    },
    {
      "name": "nginx-proxy",
      "image": "nginx",
      "essential": true,
      "memory": 128,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ],
      "links": [ "mysql-app" ],
      "mountPoints": [
        { "sourceVolume": "mysql-app", "containerPath": "/var/mysql", "readOnly": true },
        { "sourceVolume": "nginx-proxy-conf", "containerPath": "/etc/nginx/conf.d", "readOnly": true }
      ]
    }
  ]
}

The deployed Beanstalk application can be monitored in a dashboard.

AWS CodePipeline

AWS CodePipeline is a continuous integration and continuous delivery service that can be used to build, test and deploy code every time the code is updated in the GitHub repository or CodeCommit. You can start a CodePipeline that integrates Docker image code from a GitHub (or CodeCommit) repo, builds the code into a Docker image and tests the code using Jenkins, and deploys the Docker image to an AWS Beanstalk application deployment environment, eliminating the need for user intervention in updates for continuous deployment of a Docker application. Every time code is updated in the source repository, the complete CodePipeline re-runs and the deployment is updated without any interruption of service.
CodePipeline provides graphical user interfaces, such as the following, to monitor the progress of a CodePipeline in each of the phases: source code integration, Build & Test, and Deployment. In this article we discussed a new trend in Docker software development called DevOps. DevOps takes Agile software development to the next level. The article discussed several new tools and best practices for Docker software use. DevOps is an emerging trend and most suitable for Docker containerized applications. Deepak Vohra has published two books on Docker: Pro Docker and Kubernetes Microservices with Docker.
↧
Blog Post: Using RDA Pre-Install Check for EBS R12.2
Every successful installation needs good preparation; if something is missed from the prerequisites, the installation will fail or the functioning of the installed application may be affected at a later stage. The Oracle EBS R12.2 rapid install itself performs initial pre-checks to validate that all prerequisites exist. In my experience, all rapid install checks can pass and the installation will still fail. So I highly recommend using the RDA pre-install check option before installing the required software. Note, however, that RDA does not support all Oracle products. Let's see how we can use RDA (Remote Diagnostic Agent) to check the pre-install requirements for EBS R12.2. Detailed information about RDA can be found in MOS note "Remote Diagnostic Agent (RDA) – Getting Started (Doc ID 314422.1)".

Download the software from MOS.

> Copy and unzip the downloaded patch on the server:

[root@racnode1 sf_shareEBS]# ls -lrt
total 15506
-rwxrwx---. 1 root vboxsf 15877229 Jan 12 10:09 p21769913_814161213_Linux-x86-64.zip
[root@racnode1 sf_shareEBS]# unzip p21769913_814161213_Linux-x86-64.zip

> Execute the "rda.sh" script:

[root@racnode1 rda]# sh rda.sh -T hcve
Processing HCVE tests ...
Available Pre-Installation Rule Sets:
 1. Oracle Database 10g R1 (10.1.0) Preinstall (Linux)
 2. Oracle Database 10g R2 (10.2.0) Preinstall (Linux)
 3. Oracle Database 11g R1 (11.1) Preinstall (Linux)
 4. Oracle Database 11g R2 (11.2.0) Preinstall (Linux)
 5. Oracle Database 12c R1 (12.1.0) Preinstallation (Linux)
 6. Oracle Identity and Access Management PreInstall Check: Oracle Identity and Access Management 11g Release 2 (11.1.2) Linux
 7. Oracle JDeveloper PreInstall Check: Oracle JDeveloper 11g Release 2 (11.1.2.4) Linux
 8. Oracle JDeveloper PreInstall Check: Oracle JDeveloper 12c (12.1.3) Linux
 9. OAS PreInstall Check: Application Server 10g R2 (10.1.2) Linux
10. OAS PreInstall Check: Application Server 10g R3 (10.1.3) Linux
11. OFM PreInstall Check: Oracle Fusion Middleware 11g R1 (11.1.1) Linux
12. OFM PreInstall Check: Oracle Fusion Middleware 12c (12.1.3) Linux
13. Oracle Forms and Reports PreInstall Check: Oracle Forms and Reports 11g Release 2 (11.1.2) Linux
14. Portal PreInstall Check: Oracle Portal Generic
15. IDM PreInstall Check: Identity Management 10g (10.1.4) Linux
16. BIEE PreInstall Check: Business Intelligence Enterprise Edition 11g (11.1.1) Generic
17. EPM PreInstall Check: Enterprise Performance Management Server (11.1.2) Generic
18. Oracle Enterprise Manager Cloud Control PreInstall Check: Oracle Enterprise Manager Cloud Control 12c Release 4 (12.1.0.4) Linux
19. Oracle E-Business Suite Release 11i (11.5.10) Preinstall (Linux x86 and x86_64)
20. Oracle E-Business Suite Release 12 (12.1.1) Preinstall (Linux x86 and x86_64)
21. Oracle E-Business Suite Release 12 (12.2.0) Preinstall (Linux x86_64)
Available Post-Installation Rule Sets:
22. RAC 10G DB and OS Best Practices (Linux)
23. Data Guard Postinstall (Generic)
24. WLS PostInstall Check: WebLogic Server 11g (10.3.x) Generic
25. WLS PostInstall Check: WebLogic Server 12c (12.x) Generic
26. Portal PostInstall Check: Oracle Portal Generic
27. OC4J PostInstall Check: Oracle Containers for J2EE 10g (10.1.x) Generic
28. SOA PostInstall Check: Service-Oriented Architecture 11g and Later Generic
29. OSB PostInstall Check: Service Bus 11g and Later Generic
30. Oracle Forms 11g Post Installation (Generic)
31. Oracle Enterprise Manager Agent 12c Post Installation (Generic)
32. Oracle Management Server 12c Post Installation (Generic)
33. Network Charging and Control Database Post Installation (Generic)

Enter the HCVE rule set number or 0 to cancel the test
Press Return to accept the default (0)
> 21

Performing HCVE checks ...
Enter value for > /u012/EBS

Test "Oracle E-Business Suite Release 12 (12.2.0) Preinstall (Linux x86_64)" executed at 12-Jan-2017 10:16:28

Test Results
~~~~~~~~~~~~
ID      NAME                  RESULT   VALUE
======  ====================  =======  ==========================================
A00100  OS Type               RECORD   OL6 64
A00200  OS Certified?         FAILED   Not certified Oracle Linux version
A01010  ApplTierDirectory     RECORD   /u012/EBS
A01020  A_T Valid?            PASSED   ATexists
A01030  A_T Permissions OK?   PASSED   CorrectPerms
A01040  A_T Disk Space        FAILED   NotOK
A01400  Got Software Tools?   PASSED   tools_found
A02030  Limit Processes       SKIPPED  Not on certified Linux system
A02050  Limit Descriptors     SKIPPED  Not on certified Linux system
A02100  ENV Variable Unset    SKIPPED  Not SuSE Linux Enterprise 10 or SuSE ...
A02210  Kernel Params OK?     SKIPPED  Not on certified Linux system
A02240  NPTL Selected?        SKIPPED  Not on certified Linux system
A03010  Space in tmp          PASSED   Available
A03050  Swap Space (MB)       RECORD   4863.99609375
A03060  Swap Space?           FAILED   Need at least 16 GB
A03510  IP Address            RECORD   NotFound
A03530  Domain Name           RECORD   NotFound
A03540  /etc/hosts format     FAILED   No entry found
A03550  DNS Lookup            FAILED   Cannot determine IP address
A03560  Net Service Access?   PASSED   NonExist
A03570  Port 6000             PASSED   Free
A03580  Port Range OK?        SKIPPED  Not on certified Linux system
A03590  DNS Settings          FAILED   ATTEMPTSUndef TIMEOUTUndef
A03600  SysNetw File          FAILED   Missing host.domain
A03610  NoNetwProf File       PASSED   OK
A04301  RPM OL5/64 OK?        SKIPPED  Not Oracle Linux 5 64-bit
A04302  RPM OL6/64 OK?        FAILED   [openmotif21(32-bit)] not installed [...
A04303  RPM OL7/64 OK?        SKIPPED  Not Oracle Linux 7 64-bit
A04311  RPM RH5/64 OK?        SKIPPED  Not Red Hat Enterprise Linux 5 64-bit
A04312  RPM RH6/64 OK?        SKIPPED  Not Red Hat Enterprise Linux 6 64-bit
A04313  RPM RH7/64 OK?        SKIPPED  Not Red Hat Enterprise Linux 7 64-bit
A04321  RPM SLES10/64 OK?     SKIPPED  Not SuSE Linux Enterprise 10 64-bit
A04322  RPM SLES11/64 OK?     SKIPPED  Not SuSE Linux Enterprise 11 64-bit

Result file: output/collect/APPS_HCVE_A_EBS122_lin_res.htm
[root@racnode1 rda]#

RDA lists the available rule sets for pre- and post-installation checks; we just need to provide the number of the rule set we want to run. In this article we are checking for Oracle EBS R12.2 (rule 21). RDA generates the output in txt and html format. The report has three sections: Test Results, Failed Summary and Detailed Summary. The Failed Summary provides suggestions on how each failed check can be fixed.

Summary: I have been involved in many Oracle EBS installations and I can surely say the RDA pre-install check is very helpful and handy. I highly recommend using RDA for customers who are installing or upgrading EBS systems.
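As an example of acting on the Failed Summary, the A03060 finding above says at least 16 GB of swap is needed. One way to address it before re-running the check is sketched below; the file location and size are illustrative only, so adjust them to your own standards:

dd if=/dev/zero of=/swapfile bs=1M count=16384
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo "/swapfile swap swap defaults 0 0" >> /etc/fstab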
↧
Blog Post: Who Decommissioned My Enterprise Manager Agent?
I prefer to write blog posts about interesting questions on OTN, and this post is one of them. There is usually more than one EM admin managing the systems, and you may want to track other users' activity. Enterprise Manager Cloud Control provides an auditing mechanism called "comprehensive auditing". It's not enabled by default for all actions because it may consume a lot of disk space. If you want to enable it for all actions, you should use the "emcli" tool:

./emcli login -username=SYSMAN
./emcli enable_audit

After you enable comprehensive auditing for all actions, you can go to "setup" >> "security" >> "audit data" to see all audited actions. The audit data page provides filtering on audit records, so I can easily list who deleted a target from the system. If you haven't enabled comprehensive auditing for all actions on Enterprise Manager, auditing is enabled only for login/logouts and infrastructure operations (such as removing the EM key from the repository, applying an update, creating a CA, etc.).

What if you haven't enabled comprehensive auditing and someone decommissions/removes an agent from the system? In this case, you can still find who did it (or at least narrow the possibilities) by searching the access logs of OHS (the Oracle HTTP Server installed as a part of WebLogic and EM13c). The access logs are located in the EM_INSTANCE_BASE/user_projects/domains/GCDomain/servers/ohs1/logs/ folder. You can check my blog post about log locations of EM13c.

You may wonder which keywords to search for. If you want to find the agent decommission, try doing it in EM13c and check the URL of the page; you'll see something like "/em/faces/agentDecommision?target=….". So agentDecommision is the keyword we're looking for. When we run "grep agentDecommision access_log", we'll see output similar to the text below:

grep agentDecommision access_log
access_log:192.168.16.225 - - [24/Jan/2017:23:29:28 +0300] "POST /em/faces/agentDecommision?target=xxxxx.com%3A3872&type=oracle_emd HTTP/1.1" 200 78 [ecid: 1.005Hhnt2eRP9HfGayxzW6G0001BT001tIp;kXjE] [User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36]

We can easily say that the agent was decommissioned at 24/Jan/2017:23:29:28 by a Mac user whose IP is 192.168.16.225. Now we can search for logins in the audit data of EM (using the audit data page) and identify the EM user who took the action.

Related Posts
OMS Upgrade Fails At Repository Configuration With Error ORA-20251
OUGN 2016 Spring Conference
EM13c: Unique Database Service Names on DBaaS
EM13c: How to Disable Autodiscovery (and Autopromotion) of Clusterware Managed Targets
Oracle Enterprise Manager Plugin for PostgreSQL
↧