
Blog Post: Repost: Oracle Trivia Quiz

All the answers to the quiz are in the November 2014 issue of the NoCOUG Journal. I am the editor of the NoCOUG Journal. What's NoCOUG, you ask? Only the oldest and most active Oracle users group in the world. If you have not attended a NoCOUG conference in the last 28 years—yes, that's how long NoCOUG has been around—you can attend the winter conference on January 27 for free, subject to availability of free passes. The speakers include Tom Kyte, Steven Feuerstein, and Doug Cutting. Yes, that Doug Cutting.

1. Which executive vice-president of product development at Oracle began as the PL/SQL product manager? (page 23)
2. Which senior vice-president of server technologies at Oracle wrote the B-Tree indexing code back in the day? (page 23)
3. What is the evil twin of relational algebra? (page 17)
4. If you only have SELECT privilege on a table, can you also "SELECT FOR UPDATE"? (page 19)
5. Who coined the term "compulsive tuning disorder"? (page 13 footnote)
6. What percentage of women leave high technology at the mid-career mark? (page 4)
7. How can you empower application developers with access to live performance data from the production database? (page 6)
8. What kind of optimization is the root of all evil according to Professor Donald Knuth? (page 10)
9. Can UNION ALL materialized views be fast-refreshed on commit? (page 18)
10. What will be the winning lottery numbers for the next Powerball drawing? (Just checking if you're awake.)

P.S. A 10-year digital archive of the NoCOUG Journal is at http://nocoug.wordpress.com/nocoug-journal-archive/.

File: 12c_Opatches.doc

File: Network_files_12c.doc

File: db_account_unlock.sql

Blog Post: Cedar’s new website is live – get ready for the blog!

I'm really pleased that Cedar have got our new website live – just in time for UKOUG Apps 14. As you would expect, it highlights the services that Cedar provides – both Oracle Cloud (Fusion and Taleo) and obviously PeopleSoft implementation, hosting and support. It contains details of our people and locations (we've offices in Kings Cross, London, plus India, Switzerland and Australia). It also contains case studies of some of the project successes that we've had, and some of the nice things that clients have said about us. One of the things I'm most excited about is the blog. Make sure you add it to your feed reader, as we're going to be sharing some good content there from all of the practices within our company (plus the occasional post of us doing fun things!).

Blog Post: Struggling with RAC Installation – ORA-15018: diskgroup cannot be created

I've said it before: only once did I succeed in installing Oracle Clusterware without any issues, and that was during the OCM exam. I didn't hit any bug, I didn't reconfigure anything, and the installation went smoothly. But ... today I got all of the following errors:

ORA-15032: not all alterations performed
ORA-15131: block of file in diskgroup could not be read
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/mapper/mpathh' matches no disks
ORA-15025: could not open disk "/dev/mapper/mpathh"
ORA-15056: additional error message
ORA-15017: diskgroup "OCR_MIRROR" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "OCR_MIRROR"
ORA-15033: disk '/dev/mapper/mpathh' belongs to diskgroup "OCR_MIRROR"

In the beginning, while installing Oracle 11g RAC, I got the following error:

CRS-2672: Attempting to start 'ora.diskmon' on 'vsme_ora1'
CRS-2676: Start of 'ora.diskmon' on 'vsme_ora1' succeeded
CRS-2676: Start of 'ora.cssd' on 'vsme_ora1' succeeded

Disk Group OCR_MIRROR creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/mapper/mpathh' matches no disks
ORA-15025: could not open disk "/dev/mapper/mpathh"
ORA-15056: additional error message

Configuration of ASM ... failed see asmca logs at /home/oracle/app/cfgtoollogs/asmca for details
Did not succssfully configure and start ASM at /home/oracle/11.2.4/grid1/crs/install/crsconfig_lib.pm line 6912.
/home/oracle/11.2.4/grid1/perl/bin/perl -I/home/oracle/11.2.4/grid1/perl/lib -I/home/oracle/11.2.4/grid1/crs/install /home/oracle/11.2.4/grid1/crs/install/rootcrs.pl execution failed

The bad news is that the installation failed. The good news is that I could easily restart the installation without any issues, because the root.sh script is restartable. If you don't need to install the software on all nodes again, solve the problem and run the root.sh script again; once the problem is solved, it will run smoothly. If you need to install the software on all nodes, you have to deconfigure and run the installation again. To remove the failed RAC installation, run the rootcrs.pl script on all nodes except the last one, as follows:

$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force

Run the following command on the last node:

$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode

Now run ./runInstaller and start the installation again.

So let's go back to the problem. It was claiming that "disk specification '/dev/mapper/mpathh' matches no disks". Hmm ... The first thing that came to mind was the permission of the disk. So I checked it; it was root:disk. I changed it to oracle:dba and ran the root.sh script. I got the same problem again.
I checked the following log file: /home/oracle/app/cfgtoollogs/asmca

[main] [ 2014-12-09 17:26:29.220 AZT ] [UsmcaLogger.logInfo:143] CREATE DISKGROUP SQL: CREATE DISKGROUP OCR_MIRROR EXTERNAL REDUNDANCY DISK '/dev/mapper/mpathh' ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M'
[main] [ 2014-12-09 17:26:29.295 AZT ] [SQLEngine.done:2189] Done called
[main] [ 2014-12-09 17:26:29.296 AZT ] [UsmcaLogger.logException:173] SEVERE:method oracle.sysman.assistants.usmca.backend.USMDiskGroupManager:createDiskGroups
[main] [ 2014-12-09 17:26:29.296 AZT ] [UsmcaLogger.logException:174] ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/mapper/mpathh' matches no disks
ORA-15025: could not open disk "/dev/mapper/mpathh"
ORA-15056: additional error message

Oracle wasn't able to create the diskgroup, claiming that the specified device matches no disks. I logged in to the ASM instance and tried to create the diskgroup on my own:

SQL> CREATE DISKGROUP OCR_MIRROR EXTERNAL REDUNDANCY DISK '/dev/mapper/mpathh' ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M';
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/mapper/mpathh' matches no disks
ORA-15025: could not open disk "/dev/mapper/mpathh"
ORA-15056: additional error message
Linux-x86_64 Error: 13: Permission denied
Additional information: 42
Additional information: -807671168

I checked the permission; it was root:disk. I changed it to oracle:dba and ran the command again.

SQL> CREATE DISKGROUP OCR_MIRROR EXTERNAL REDUNDANCY DISK '/dev/mapper/mpathh' ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M';
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15017: diskgroup "OCR_MIRROR" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "OCR_MIRROR"

I ran the statement again and this time got a different message:

SQL> CREATE DISKGROUP OCR_MIRROR EXTERNAL REDUNDANCY DISK '/dev/mapper/mpathh' ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M';
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15033: disk '/dev/mapper/mpathh' belongs to diskgroup "OCR_MIRROR"

I tried to mount the diskgroup and got the following error:

SQL> alter diskgroup ocr_mirror mount;
alter diskgroup ocr_mirror mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "OCR_MIRROR" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "OCR_MIRROR"

I checked the permission. It had changed again! I changed it back to oracle:dba, tried to mount the diskgroup, and got the following error:

SQL> alter diskgroup ocr_mirror mount;
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15131: block of file in diskgroup could not be read

Ohhh ... come on! I logged in to the ASM instance and queried the V$ASM_DISK and V$ASM_DISKGROUP views.

SQL> select count(1) from v$asm_disk;

  COUNT(1)
----------
         0

I changed the permission to oracle:dba and ran the query again:

SQL> /

  COUNT(1)
----------
         1

Then I ran:

SQL> select count(1) from v$asm_diskgroup;

  COUNT(1)
----------
         0

What??? The permission is changed automatically when I query the V$ASM_DISKGROUP view? Yes ... even when you query V$ASM_DISKGROUP, Oracle checks the ASM_DISKSTRING parameter and reads the header of every disk listed in that parameter.
For more information on this topic, you can check my following blog post: V$ASM_DISKGROUP displays information from the header of ASM disks. So this means that when I query the V$ASM_DISK view, Oracle scans the disk (with a process that runs under the root user) and changes the permission of the disk. After modifying the /etc/udev/rules.d/99-oracle-asmdevices.rules file and adding the following line, the problem was solved:

NAME="/dev/mapper/mpathh", OWNER="oracle", GROUP="dba", MODE="0660"

I then checked the permission of the disk again after querying V$ASM_DISK multiple times, made sure it no longer changed, and ran the root.sh script. Everything worked fine and I got the following output:

ASM created and started successfully.
Disk Group OCR_MIRROR mounted successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 5feed4cb66df4f43bf334c3a8d73af92.
Successfully replaced voting disk group with +OCR_MIRROR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name            Disk group
--  -----    -----------------                ---------            ----------
 1. ONLINE   5feed4cb66df4f43bf334c3a8d73af92 (/dev/mapper/mpathh) [OCR_MIRROR]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'vsme_ora1'
CRS-2676: Start of 'ora.asm' on 'vsme_ora1' succeeded
CRS-2672: Attempting to start 'ora.OCR_MIRROR.dg' on 'vsme_ora1'
CRS-2676: Start of 'ora.OCR_MIRROR.dg' on 'vsme_ora1' succeeded
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
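As a quick sanity check before rerunning root.sh (a hedged sketch, not from the original post; it assumes a udev-based Linux, and older releases may use start_udev instead of udevadm), you can reload the udev rules and confirm that the multipath device keeps the ownership you just configured:

# Reload the udev rules and re-trigger device events
udevadm control --reload-rules
udevadm trigger

# Confirm the device keeps the expected ownership and mode after a rescan
ls -l /dev/mapper/mpathh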

Wiki Page: Extents Allocation: Round-Robin

DBAs perform many tasks related to datafiles and tablespaces, such as creating them, but also creating segments. Every segment, as you know, has extents, and those extents are ultimately allocated in datafiles as blocks. Usually we perform those tasks without thinking about what happens behind the scenes: what happens with the extents and the datafiles, and how the extents are allocated. That is what you will read in this article. I will show you a few examples that will help you understand how extents are allocated in datafiles. I will analyse only Locally Managed Tablespaces. If you have been reading my articles, you already know that I like to write them in a "Concept" followed by "Example/Internals" fashion. So, here is the concept of this article: "Extents are allocated across datafiles in a round-robin fashion." Yes, round-robin. Some people might think that one datafile is filled up first and then the next datafile starts to fill, but that is not how it works. To explain this, let's go to the examples. I will create a locally managed tablespace named "tbslocal" and use a table named "dgomez". Note: these examples were run on the following versions, and the behavior is the same in all of them: 10.2.0.1.0, 11.2.0.4.0 and 12.1.0.1.0.

Extent Management Local Uniform

Creating the tablespace:

SQL> create tablespace tbslocal datafile size 10m, size 10m, size 10m extent management local uniform size 64k;
Tablespace created.

Creating our segment:

SQL> create table dgomez (id number, value varchar2(40)) tablespace TBSLOCAL;
Table created.

When you create the table segment, one extent is allocated.

SQL> select file_id, extent_id, blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2;

   FILE_ID  EXTENT_ID     BLOCKS         KB
---------- ---------- ---------- ----------
         8          0          8         64

Let's add 2 more extents to the segment manually:

SQL> alter table dgomez allocate extent;
Table altered.

SQL> alter table dgomez allocate extent;
Table altered.

And now let's check the result:

SQL> select file_id, extent_id, blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2;

   FILE_ID  EXTENT_ID     BLOCKS         KB
---------- ---------- ---------- ----------
         8          0          8         64
         7          1          8         64
         6          2          8         64

As you can see, our extents were allocated in round-robin fashion. But do you remember what kind of tablespace we created? In this example I was using "EXTENT MANAGEMENT LOCAL UNIFORM". There is a small difference between "EXTENT MANAGEMENT LOCAL UNIFORM" and "EXTENT MANAGEMENT LOCAL AUTOALLOCATE", and that difference is what we will see in the next example.

Extent Management Local Autoallocate

With autoallocate, Oracle tries to understand what kind of segments are being created in the tablespace; it analyses the data and, based on that, it sizes the extents, so the next extent can be bigger than the previous one. That is why there is a small difference when using autoallocate.

SQL> drop tablespace tbslocal including contents and datafiles;
Tablespace dropped.

SQL> create tablespace tbslocal datafile size 10m, size 10m, size 10m extent management local autoallocate;
Tablespace created.

SQL> create table dgomez (id number, value varchar2(40)) tablespace TBSLOCAL;
Table created.
First extent created:

SQL> select file_id, extent_id, blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2;

   FILE_ID  EXTENT_ID     BLOCKS         KB
---------- ---------- ---------- ----------
         8          0          8         64

Allocating a new extent manually:

SQL> alter table dgomez allocate extent;
Table altered.

SQL> select file_id, extent_id, blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2;

   FILE_ID  EXTENT_ID     BLOCKS         KB
---------- ---------- ---------- ----------
         8          0          8         64
         8          1          8         64

It was allocated in the same file? Doesn't Oracle allocate extents in a round-robin fashion when we're using "EXTENT MANAGEMENT LOCAL AUTOALLOCATE"? Perhaps we are missing something here, because Oracle is expensive enough that it should have this feature, you know... Let's create another extent manually:

SQL> alter table dgomez allocate extent;
Table altered.

SQL> select file_id, extent_id, blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2;

   FILE_ID  EXTENT_ID     BLOCKS         KB
---------- ---------- ---------- ----------
         8          0          8         64
         8          1          8         64
         8          2          8         64

No, the same behavior; we are still not seeing round-robin. OK, let me try one last time with 13 more extents (13 iterations of the same ALTER TABLE):

SQL> alter table dgomez allocate extent;
Table altered.

SQL> select file_id, extent_id, blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2;

   FILE_ID  EXTENT_ID     BLOCKS         KB
---------- ---------- ---------- ----------
         8          0          8         64
         8          1          8         64
         8          2          8         64
         8          3          8         64
         8          4          8         64
         8          5          8         64
         8          6          8         64
         8          7          8         64
         8          8          8         64
         8          9          8         64
         8         10          8         64
         8         11          8         64
         8         12          8         64
         8         13          8         64
         8         14          8         64
         8         15          8         64

16 rows selected.

At this point people could think that with "EXTENT MANAGEMENT LOCAL AUTOALLOCATE" one datafile is filled up first, then the next one, and so on. But wait, something happens at extent 16:

SQL> alter table dgomez allocate extent;
Table altered.

SQL> select file_id, extent_id, blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2;

   FILE_ID  EXTENT_ID     BLOCKS         KB
---------- ---------- ---------- ----------
         8          0          8         64
         8          1          8         64
         8          2          8         64
         8          3          8         64
         8          4          8         64
         8          5          8         64
         8          6          8         64
         8          7          8         64
         8          8          8         64
         8          9          8         64
         8         10          8         64
         8         11          8         64
         8         12          8         64
         8         13          8         64
         8         14          8         64
         8         15          8         64
         7         16        128       1024

17 rows selected.

Finally! Now it looks like Oracle is using another datafile in the tablespace. Let's allow Oracle to make me happy again, with 4 more iterations:

SQL> alter table dgomez allocate extent;
Table altered.

SQL> select file_id, extent_id, blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2;

   FILE_ID  EXTENT_ID     BLOCKS         KB
---------- ---------- ---------- ----------
         8          0          8         64
         8          1          8         64
         8          2          8         64
         8          3          8         64
         8          4          8         64
         8          5          8         64
         8          6          8         64
         8          7          8         64
         8          8          8         64
         8          9          8         64
         8         10          8         64
         8         11          8         64
         8         12          8         64
         8         13          8         64
         8         14          8         64
         8         15          8         64
         7         16        128       1024
         6         17        128       1024
         8         18        128       1024
         7         19        128       1024
         6         20        128       1024

21 rows selected.

Fine, our extents have started to be allocated in a round-robin fashion. But does that mean round-robin starts at extent 16? Not at all. It does not depend on extent 16, as we will see later in the article. "Hey Deiby, but then the extents are not allocated evenly" (evenly is ASM's word, for sure). Are you sure? Do you know how many blocks are in each datafile?

SQL> select file_id, sum(blocks) from dba_extents where file_id in (6,7,8) group by file_id;

   FILE_ID SUM(BLOCKS)
---------- -----------
         6         256
         8         256
         7         256

The first 16 small extents add up to 128 blocks, the same size as each of the 1M extents that follow, so every datafile ends up with 256 blocks: the allocation is even after all.
Now let's go back and answer the question: does round-robin always start after extent 16? No.

SQL> drop tablespace tbslocal including contents and datafiles;
Tablespace dropped.

SQL> create tablespace tbslocal datafile size 10m, size 10m, size 10m extent management local autoallocate;
Tablespace created.

SQL> create table dgomez (id number, value varchar2(40)) tablespace TBSLOCAL;
Table created.

SQL> select file_id, extent_id, blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2;

   FILE_ID  EXTENT_ID     BLOCKS         KB
---------- ---------- ---------- ----------
         8          0          8         64

Creating a 2M extent manually:

SQL> alter table dgomez allocate extent (size 2m);
Table altered.

SQL> select file_id, extent_id, blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2;

   FILE_ID  EXTENT_ID     BLOCKS         KB
---------- ---------- ---------- ----------
         8          0          8         64
         7          1        128       1024
         6          2        128       1024

As you can see, Oracle did not create a single 2M extent; instead it created two extents of 1M each. But there is another interesting thing: starting with the second extent, Oracle allocated the extents in a round-robin fashion. So does it depend on extent 16? No. It depends on the number of blocks already allocated. I could say that Oracle starts to use round-robin after 128 blocks have been allocated, but we should research this further. Let's confirm that we are now using round-robin:

SQL> alter table dgomez allocate extent (size 3m);
Table altered.

SQL> select file_id, extent_id, blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2;

   FILE_ID  EXTENT_ID     BLOCKS         KB
---------- ---------- ---------- ----------
         8          0          8         64
         7          1        128       1024
         6          2        128       1024
         8          3        128       1024
         7          4        128       1024
         6          5        128       1024

6 rows selected.

Confirmed!

Disadvantage: round-robin is not always ideal. Oracle only creates extents using round-robin; it is not aware of the size of each filesystem, so it cannot spread the extents evenly by capacity. For example:

Using ASM with 2 disks (1G and 100G): for every extent created on the 1G disk, roughly 100 extents are created on the 100G disk. This is very good, because every disk, regardless of its size, ends up with the same percentage of usage.

Using a filesystem with 2 disks (1G and 100G): you can indeed create one datafile on the 1G disk and another datafile on the 100G disk, but Oracle will always allocate one extent in each file in turn, so when the 1G disk is full the 100G disk will only be filled to about 1%. As always, I strongly recommend using ASM for our datafiles.

SQL> drop tablespace tbslocal including contents and datafiles;
Tablespace dropped.

SQL> create tablespace tbslocal datafile size 10m, size 100m extent management local uniform size 64k;
Tablespace created.

SQL> create table dgomez (id number, value varchar2(40)) tablespace TBSLOCAL;
Table created.

SQL> alter table dgomez allocate extent;
Table altered.

SQL> alter table dgomez allocate extent;
Table altered.

SQL> alter table dgomez allocate extent;
Table altered.

SQL> select file_id, extent_id, blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2;

   FILE_ID  EXTENT_ID     BLOCKS         KB
---------- ---------- ---------- ----------
         7          0          8         64
         6          1          8         64
         7          2          8         64
         6          3          8         64

Even though one datafile is ten times larger than the other, the extents keep alternating between the two files.
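To see the imbalance described above on a filesystem-based tablespace, a quick check (a hedged sketch, not part of the original article) is to compare the space allocated to extents in each datafile against the size of that datafile; dba_extents and dba_data_files are standard dictionary views:

-- Percentage of each datafile consumed by allocated extents
SQL> select f.file_id,
            f.bytes/1024/1024                          as file_mb,
            nvl(sum(e.bytes),0)/1024/1024              as allocated_mb,
            round(100*nvl(sum(e.bytes),0)/f.bytes, 1)  as pct_used
     from   dba_data_files f
            left join dba_extents e on e.file_id = f.file_id
     where  f.tablespace_name = 'TBSLOCAL'
     group  by f.file_id, f.bytes
     order  by f.file_id;

On the 10m/100m example above, the smaller file will show a much higher percentage used even though both files hold the same number of extents.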

Blog Post: Error while applying hrglobal driver

You can follow the datainstaller and hrglobal apply process from here. Possible errors:

Error 1:

sqlplus -s APPS/***** @/u01/EBSUAT/apps/apps_st/appl/per/12.0.0/patch/115/sql/hrrbdeib.sql 2 4
Connected.
declare
*
ERROR at line 1:
ORA-04091: table HR.FF_USER_ENTITIES is mutating, trigger/function may not see it
ORA-06512: at "APPS.FFDICT", line 1224
ORA-06512: at "APPS.FFDICT", line 1901
ORA-06512: at "APPS.FF_USER_ENTITIES_BRI", line 7
ORA-04088: error during execution of trigger 'APPS.FF_USER_ENTITIES_BRI'
ORA-06512: at "APPS.HRDYNDBI", line 5446
ORA-06512: at line 42

Solution: HRGLOBAL, HRRBDEIB Failed: ORA-20106: A hrrbdeib worker has failed and ORA-04091: Table HR.FF_USER_ENTITIES is Mutating (Doc ID 357354.1)

Error 2:

ORA-02291: integrity constraint (HR.FF_ROUTE_PARAMETER_VALUES_FK1) violated – parent key not found

Solution: Note 375955.1 – HRGLOBAL Troubleshooting Guide

Blog Post: delete argument too long linux

When you're trying to delete a large number of files in Linux, you may receive the following error message:

bash: /bin/rm: Argument list too long

Solution: go to that directory and execute the command below:

find . -name 'filename*' -print0 | xargs -0 rm
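As an aside (not from the original post), GNU find can also remove the matching files itself, which avoids spawning rm through xargs; this assumes GNU findutils is in use:

# Delete the matching regular files directly
find . -name 'filename*' -type f -delete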

Blog Post: Convert a Tnsnames.ora File to a Toad Session Import File

Have you ever wanted a quick and easy way of converting all those database entries in your tnsnames.ora file into something that Toad can use to populate the "sessions" grid? Read on.

Normally Toad offers you a drop-down list of the various database entries in the tnsnames.ora that is being used. However, if your tnsnames.ora file contains an IFILE entry, Toad doesn't follow the included file, and any aliases defined there - or in subsequent nested IFILEs - will not appear in the drop-down list. You can get around this by connecting to each database in turn, which populates the grid of sessions in the "New Sessions" dialogue. However, this process is a tad on the fiddly side and very boring indeed. Therefore I've created a utility that allows you to read a tnsnames.ora file and, from that, create a file that can be imported to populate the grid.

You will need Toad 11.x or higher to be able to import your connections; previous versions do not have the ability to export and import the sessions. I tested this with Toad 11.6, which is the oldest version I currently have. The utility is based on the tnsnames_checker that I previously announced. You can find that utility at this location. This utility doesn't make any attempt at semantic validation, although the lexer or parser may highlight some syntax errors in the tnsnames.ora file. Everything in the tnsnames.ora file which is a database alias entry will be written to the output file. Output is always to stdout and so should be redirected to a proper file of your choosing at run time, if you wish to import the results.

Download and Install

Source code is available on GitHub, but you do not need it if you don't intend to build or modify the utility. You need to download the compiled code from this location. It is then a simple case of unzipping it and running the tns2toad.cmd file if you are on Windows, or the tns2toad.sh script if you are on some form of Unix. Your PATH is assumed to contain the location of the java executable; you can check by running the java -version command. If it barfs, you need to sort out your PATH. Java 1.6 (aka Java 6) is the minimum required version of Java. The software has been tested with Oracle's Java 6 and Java 7. It should work with Java 8, as all versions are supposed to be backward compatible, but I have not been able to test it with OpenJDK's version of Java.

Tns2toad is a command line utility and you should run it from a DOS or shell session while your current directory is the location where you unzipped it to. The following is a list of files that you should find:

antlr-4.4-complete.jar : the ANTLR4 runtime support for the parser section of the code.
tns2toad.jar : the runtime support for the actual utility itself.
tns2toad.cmd : a batch file for Windows users.
tns2toad.sh : a shell script for Linux and/or Unix users.

If you need to change the classpath, edit the latter two files as necessary to suit your system.

Parameter Details

If you run the utility with an invalid parameter, the correct usage details will be displayed, as follows:

C:\Software\ANTLR\TNS2Toad\test> tns2toad --help
Invalid option '--help'
Usage: tns2toad filename
Options:
--oracle_home The default oracle home to be used.
--user The default user for all connections.
Parameter: filename. The tnsnames.ora file to be parsed.

Tns2toad requires the options to be specified first and the mandatory file name last. You cannot mix and match. Options may be specified in any letter case: lower, upper or mixed.
The options are as follows, and all are optional:

--help : Displays usage details. Technically this is an error, but any incorrect option will display the usage details as shown above.
--oracle_home : If you wish, you can set all entries in the generated file to use the same Oracle Home folder. This folder should be specified in full, on the command line. If omitted, the import file will not specify an Oracle Home and Toad will use whatever you have configured as the default when you run the import. If there are spaces or special characters in the path name, wrap the full path in double quotes. Beware: it is not likely that Oracle will work correctly from a folder which has spaces in the path name.
--user : If you wish, you can set each and every one of the imported sessions to use the same user name. Obviously, a tnsnames.ora file doesn't have user details, so by default there will be none. If you use the same user on each (or most) of your connections, specify it here and save some typing later on in life.

Running the Utility

As mentioned above, the utility reads a tnsnames.ora file and writes a Toad connections export file to stdout, so you will need to trap the output and redirect it to a file of your choosing.

To run the utility with all defaults set:

tns2toad c:\tns_admin\tnsnames.ora > c:\myToadSessions.txt

That will set OracleHome and User to blank and ConnectAs to "Normal".

To run the utility with a specific Oracle Home for all connections:

tns2toad --oracle_home c:\oracle\product\11gr2\client1 c:\tns_admin\my_tnsnames.ora > c:\myToadSessions.txt

That will set OracleHome to the supplied value for all connections; User will be set to blank and ConnectAs will be set to "Normal".

To run the utility with a specific database login for all connections:

tns2toad --user system c:\tns_admin\tnsnames.ora > c:\myToadSessions.txt

That will set OracleHome to blank, User to "system" for all connections and ConnectAs to "Normal". If the user supplied is "sys" then ConnectAs will be set to SYSDBA.

Output File Format

Each entry in the output file will resemble the following. There will be one section for each database alias in the input file:

[LOGIN1]
User=SYS
Server=barney
AutoConnect=0
OracleHome=c:\oracle\home
SavePassword=0
Favorite=0
SessionReadOnly=0
Alias=
Host=
InstanceName=
ServiceName=
SID=
Port=
LDAP=
Method=0
Protocol=TNS
ProtocolName=TCP
Color=8421376
ConnectAs=SYSDBA
LastConnect=19600407031549
RelativePosition=0
GUID=

There will be some other text at the end of the output which is required for Toad to recognise the file and to import it, but that is not shown here. The following entries are of note:

[LOGINn] : this is the section header. The numeric suffix increases by 1, starting from 1, for each new entry. There will be one of these sections for each database alias found in the tnsnames.ora file.
RelativePosition : this is set to the [LOGINn] value minus 1. The grid starts numbering its entries from zero while the logins start numbering at 1. If your grid is currently sorted into any desired order, the RelativePosition value will be ignored and the entry will be placed in the grid according to your chosen sort order.
User : normally blank and, if so, you will be prompted at login to supply a user name and password for the connection. May be populated if you specified the --user option on the command line.
Server : this is the alias name, including the domain name if present, read from the tnsnames.ora file.
In the event that an entry in tnsnames.ora consists of an alias list, each alias will get a separate entry in the output file.

OracleHome : normally blank and, if so, Toad will use the default Oracle Home at runtime for the connection. May be populated if you specified the --oracle_home option on the command line.
ConnectAs : this will normally be "Normal", but if a --user sys option was specified on the command line it will change to "SYSDBA", as all SYS connections must be as sysdba. Warning: don't tell Bert if you use SYS though! There is no option available to allow connections as SYSOPER.
LastConnect : this is set to something resembling my date and time of birth in YYYYMMDDHHMMSS format. Yes, I am that old! This value should allow you to sort your grid by the Last Connect column and keep all the new tnsnames entries at the bottom. Until you need them, of course.
Color : similar to LastConnect above, this is set purely to separate the imported entries from the ones you added yourself. As far as I am aware, no-one in the world actually likes the teal colour - except a company I used to work for, that is, sadly now no longer in business - so it should be safe enough to assume that it will indeed help keep the imported entries separate from your manually entered ones.

Obviously, passwords are not part of a tnsnames.ora file, so the utility is unable to set those up for you. Equally, these are encrypted based on your login to your computer, amongst other things, and so it's practically impossible for tns2toad to be able to set passwords. And finally, at least for the UK, no, I haven't spelt favorite or color wrongly, Toad has! But that's how it has to be when dealing with "foreigners".

Importing the Results

The following applies to Toad 11.6, because that's how I tested it; later versions may be slightly different. Older versions will most likely not have the ability to import connection details; however, I have a plan ... see below.

Start Toad. Click Session - New Connection. When the dialogue appears, there will be two buttons showing icons resembling a 3.5" floppy disc with a blue arrow - like the ones you can see somewhere close to here, over on the right. You want the one with the arrow pointing out of the disc. Hover over the icon and it should pop up a hint that says "import". Click it. On the subsequent "Connections Import File" dialogue, navigate in the usual manner to the location where you saved your file. Select it, and click the "open" button. After a short delay, the grid should be showing all the new connections. If your grid is sorted by the Last Connect column, the new connections will be added at the bottom. Strangely, on my Toad at least, the date doesn't appear, but maybe it's because it's such a long, long time ago! Ahem! No, it's because there was a bug in the LastConnect value; it had two extra digits. This has been fixed. My grid now looks like the following, with the newly imported sessions nicely collected at the bottom.

Deleting Extraneous Entries

In the unlikely event that you imported some sessions that you really do not need, simply select them (click, CTRL-click etc.) and press the DEL key to delete the unwanted ones.

Did I mention a Plan?

Prior to Toad 11.x, it wasn't possible to import connections. There is a way to get around this, but I can't accept responsibility for foul-ups and I have not tested this method. The format and content of the connections export file and the connections.ini file are remarkably similar.
You need to shut down Toad, then find the user files location on your PC and, with Toad closed, copy the connections.ini file to a safe place. Open the connections.ini file and replace the contents with the contents of the file generated by tns2toad. Before saving the file, scroll to the bottom and remove only the following line:

Split file here. CONNECTIONS.INI above, CONNECTIONPWDS.INI below

Save the file and exit. When you start Toad, the list of connections on the grid should be set to the ones you generated - without any passwords. You could try appending the generated file contents to the existing connections.ini file, but note that you will/may have to renumber the various section headers - I do not know how Toad copes when you import two connections with the same LOGINn name. Enjoy.

Blog Post: Adding Oracle Solaris 11.2 Host to Oracle EM 12c - using offline Patch

I was trying to add an Oracle Solaris 11.2 host to Oracle EM 12cR4. The OMS server is installed on Linux x86-64, and there was no agent software available to deploy to the Solaris host. This article demonstrates how to download and install the missing agent software on the EM12c OMS server in offline mode.

Log in to the EM12c console and navigate: Setup > Self Update > Agent Software. This lists all the available agent software. If the agent software shows a status of "Available" rather than "Applied", you need to download the required patch and apply it in offline mode. Here, the Solaris 11 agent software is available but not applied.

- To apply this patch on the OMS, download patch 18797137 from MOS and import it into the OMS software library using the emcli command in offline mode.
- Upload the patch to a server accessible to the OMS user.
- Log in as the "sysman" user using the emcli command:

[oracle@oem12c sw_home]$ /u01/em12c_home/oms/bin/emcli login -username=sysman
Enter password
Login successful

- Import the patch into EM in offline mode:

[oracle@oem12c sw_home]$ /u01/em12c_home/oms/bin/emcli import_update -omslocal -file=/u01/sw_home/p18797137_112000_Generic.zip
Processing update: Agent Software - Agent Software (12.1.0.4.0) for Oracle Solaris on x86-64 (64-bit)
Successfully uploaded the update to Enterprise Manager. Use the Self Update Console to manage this update.
[oracle@oem12c sw_home]$

- Go back to the Self Update page. The status of the agent software has now changed from "Available" to "Downloaded".
- If we click Apply, the patch is placed in the agent software library and becomes ready to deploy to hosts.
- The job completes successfully.
- The status of the agent software changes to "Applied" and it is ready for host deployment.
- Now, when we try to add a host, the wizard shows whether the agent software is available or not. In the screen above, for the IBM machine, the agent software is not available to be deployed.
- Create the user, directory and required privileges on the agent machine:

root@soltest1:~# useradd -d /export/home/oemagent -m oemagent
80 blocks
root@soltest1:~# mkdir -p /u01/oemagent
root@soltest1:~# chmod -R 775 /u01/oemagent/
root@soltest1:~# chown -R oemagent:staff /u01/oemagent/
root@soltest1:~#

- Provide all the required inputs to deploy the agent.
- Log in to the agent host as the root user and execute the listed scripts manually:

root@soltest1:/u01# /u01/oemagent/core/12.1.0.4.0/root.sh
Finished product-specific root actions.
creating /var/opt/oracle
Creating /var/opt/oracle/oragchomelist file...
root@soltest1:/u01# /export/home/oemagent/oraInventory/orainstRoot.sh
Changing permissions of /export/home/oemagent/oraInventory
Adding read,write permissions for group,Removing read,write,execute permissions for world.
Changing groupname of /export/home/oemagent/oraInventory to staff.
The execution of the script is complete
root@soltest1:/u01#

- Check the status of the agent:

oemagent@soltest1:/u01/oemagent/agent_inst/bin$ ./emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent Version : 12.1.0.4.0
OMS Version : 12.1.0.4.0
Protocol Version : 12.1.0.1.0
Agent Home : /u01/oemagent/agent_inst
Agent Log Directory : /u01/oemagent/agent_inst/sysman/log
Agent Binaries : /u01/oemagent/core/12.1.0.4.0
Agent Process ID : 5186
Parent Process ID : 5180
Agent URL : https://soltest1.oralabs.com:3872/emd/main/
Local Agent URL in NAT : https://soltest1.oralabs.com:3872/emd/main/
Repository URL : https://oem12c.oralabs.com:4903/empbs/upload
Started at : 2014-10-22 06:30:19
Started by user : oemagent
Operating System : SunOS version 5.11 (amd64)
Last Reload : (none)
Last successful upload : 2014-10-22 06:46:01
Last attempted upload : 2014-10-22 06:46:01
Total Megabytes of XML files uploaded so far : 0.36
Number of XML files pending upload : 0
Size of XML files pending upload(MB) : 0
Available disk space on upload filesystem : 94.64%
Collection Status : Collections enabled
Heartbeat Status : Ok
Last attempted heartbeat to OMS : 2014-10-22 06:47:32
Last successful heartbeat to OMS : 2014-10-22 06:47:32
Next scheduled heartbeat to OMS : 2014-10-22 06:48:32
---------------------------------------------------------------
Agent is Running and Ready
oemagent@soltest1:/u01/oemagent/agent_inst/bin$

The agent deployment on the Solaris 11.2 host completed successfully. Thanks for reading.

regards,
X A H E E R
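As an optional final check from the OMS side (a hedged sketch, not from the original post; the target name pattern is illustrative), you can confirm that the new host target was registered in Enterprise Manager:

[oracle@oem12c ~]$ /u01/em12c_home/oms/bin/emcli login -username=sysman
[oracle@oem12c ~]$ /u01/em12c_home/oms/bin/emcli get_targets -targets="soltest1%:host"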

Blog Post: Support Id : Autoconfig EBS

If you are willing to know more details about AutoConfig, see: Using AutoConfig to Manage System Configurations in Oracle E-Business Suite Release 12 (Doc ID 387859.1)
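For quick reference (a hedged sketch based on the usual R12 locations; check Doc ID 387859.1 for the exact paths in your environment), AutoConfig is typically run with the adautocfg.sh wrapper on each tier:

# Application tier - run as the application owner after sourcing the APPS environment
$ADMIN_SCRIPTS_HOME/adautocfg.sh

# Database tier - run as the database owner after sourcing the database environment
$ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh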

Blog Post: Adding swap space to Solaris 11.2

I was installing Oracle Database 12c on Oracle Solaris 11.2 and the pre-requisite check for the database installation failed on the swap space requirement. The configured swap space on the server was 1GB and the physical memory of the server is 4GB; at least 4GB of swap should be configured to avoid this error. This article shows how to increase the swap space of the system without any downtime: we can add additional swap space online using zfs commands.

- Identify the volume currently used for swap:

root@soltest1:~# swap -l
swapfile dev swaplo blocks free
/dev/zvol/dsk/rpool/swap 303,1 8 2097144 2097144

- Identify the size of the current swap volume:

root@soltest1:~# zfs get volsize rpool/swap
NAME PROPERTY VALUE SOURCE
rpool/swap volsize 1G local

- Set a new size for the swap volume:

root@soltest1:~# zfs set volsize=5g rpool/swap
root@soltest1:~# zfs get volsize rpool/swap
NAME PROPERTY VALUE SOURCE
rpool/swap volsize 5G local
root@soltest1:~#

It is very simple to resize the swap volume in Solaris 11.

regards,
X A H E E R
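One caveat worth checking (a hedged note, not from the original post): if swap -l still reports the old size after the resize, the device may need to be removed and re-added so the kernel picks up the extra space. Only do this when the system can tolerate briefly dropping that swap device:

# Confirm the size the kernel is actually using
swap -l

# If the old size is still reported, remove and re-add the swap device
swap -d /dev/zvol/dsk/rpool/swap
swap -a /dev/zvol/dsk/rpool/swap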

Blog Post: Oracle RAC One Node Migration

You don't always need a full RAC database, or want to spend extra dollars on RAC licenses for your extra cores; you might have a small application whose workload only requires a single-instance database, while you already have a Grid Infrastructure installed for other, larger RAC databases. In that scenario, we can use the existing Grid Infrastructure and still get high-availability features for the single-instance database, at a lower cost than a 2-node RAC. This article explains how to convert a single-instance database into a RAC One Node database.

Oracle RAC One Node changed starting with Oracle Database 11.2.0.2. Before that version it was shipped as patch 9004119, which contained the Oracle RAC One Node scripts and utilities. From version 11.2.0.2, Oracle RAC One Node:

· is shipped with the Oracle Database software; no additional one-off patches are required.
· can be completely managed by srvctl; all specific scripts from previous versions were deprecated.

Migration to Oracle RAC One Node

1 Conversion to RAC One Node from a single-instance RAC database

[oracle@ajithpathiyil1 ~]$ DB_NAME=db_name

For example:
[oracle@ajithpathiyil1 ~]$ DB_NAME=racone

Ensure you have only one instance in your database:
[oracle@ajithpathiyil1 ~]$ srvctl config database -d $DB_NAME | grep "Database instances:"

Ensure that you have a database initialization file in the Oracle Home whose only parameter is spfile, pointing to the spfile on ASM:
[oracle@ajithpathiyil1 ~]$ cat $ORACLE_HOME/dbs/init${DB_NAME}.ora

Make links to the database initialization file for the RAC One Node instances:
[oracle@ajithpathiyil1 ~]$ cd $ORACLE_HOME/dbs
[oracle@ajithpathiyil1 ~]$ ln -s init${DB_NAME}.ora init${DB_NAME}_1.ora
[oracle@ajithpathiyil1 ~]$ ln -s init${DB_NAME}.ora init${DB_NAME}_2.ora

Ensure that you have a database password file in the Oracle Home:
[oracle@ajithpathiyil1 ~]$ ls -l $ORACLE_HOME/dbs/orapw${DB_NAME}

Make links to the database password file for the RAC One Node instances:
[oracle@ajithpathiyil1 ~]$ cd $ORACLE_HOME/dbs
[oracle@ajithpathiyil1 ~]$ ln -s orapw${DB_NAME} orapw${DB_NAME}_1
[oracle@ajithpathiyil1 ~]$ ln -s orapw${DB_NAME} orapw${DB_NAME}_2

Convert the database to RAC One Node:
[oracle@ajithpathiyil1 ~]$ srvctl convert database -d $DB_NAME -c RACONENODE -i $DB_NAME -w 5

-w is a timeout in minutes determining how long the source Oracle instance waits for transactions to complete before shutdown.

Check that the conversion was successful:
[oracle@ajithpathiyil1 ~]$ srvctl config database -d $DB_NAME

Example output:
Database unique name: racone
Database name:
Oracle home: /u01/app/oracle/product/11.2.0.3/db_2
Oracle user: oracle
Spfile: +data_1/RACONE/PARAMETERFILE/spfile.50125.787548439
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racone
Database instances:
Disk Groups: DATA_1
Mount point paths: /u01/app/oracle/product/11.2.0.3/db_2,/u01/app/oracle/product/11.2.0.3/db_1
Services: racone_sql
Type: RACOneNode
Online relocation timeout: 5
Instance name prefix: racone
Candidate servers: ajithpathiyil1
Database is administrator managed

2 Relocation of the RAC One Node instance

To relocate a RAC One Node database to another node, issue the following command:
[oracle@ajithpathiyil1 ~]$ srvctl relocate database -d $DB_NAME -n hostname -v

For example:
[oracle@ajithpathiyil1 ~]$ srvctl relocate database -d $DB_NAME -n ajithpathiyil2 -v

Example output:
Configuration updated to two instances
Instance racone_2 started
Services relocated
Waiting for up to 5 minutes for instance racone_1 to stop ...
Instance racone_1 stopped
Configuration updated to one instance

3 Troubleshooting

There are some possible problems when converting databases.

Problems during relocation of services

Symptoms: you may encounter the following errors during relocation:

PRCS-1011 : Failed to modify server pool RACONE
CRS-2736: The operation requires stopping resource 'ora.racone.db' on server 'ajithpathiyil1'
CRS-2736: The operation requires stopping resource 'ora.racone.racone_oacore.svc' on server 'ajithpathiyil1'
CRS-2738: Unable to modify server pool 'ora.RACONE' as this will affect running resources, but the force option was not specified

PRCD-1222 : Online relocation of database "RACONE" failed but database was restored to its original state
PRCR-1106 : Failed to relocate resource ora.racone.racone_oacore.svc from node ajithpathiyil1 to node ajithpathiyil2
PRCR-1089 : Failed to relocate resource ora.racone.racone_oacore.svc.
CRS-2800: Cannot start resource 'ora.racone.db' as it is already in the INTERMEDIATE state on server 'ajithpathiyil2'
CRS-2731: Resource 'ora.racone.racone_oacore.svc' is already running on server 'ajithpathiyil1'

Solution: remove the database from CRS and register it as RAC One Node instead of converting it.

Save the database configuration:
[oracle@ajithpathiyil1 ~]$ srvctl config database -d $DB_NAME
[oracle@ajithpathiyil1 ~]$ srvctl getenv database -d $DB_NAME

Remove the database from CRS:
[oracle@ajithpathiyil1 ~]$ srvctl remove database -d $DB_NAME -v

Example output:
Remove the database racone? (y/ ) y
Successfully removed database and its dependent services.

Add the database to CRS as RAC One Node:
[oracle@ajithpathiyil1 ~]$ srvctl add database -d $DB_NAME -o $ORACLE_HOME -p path_to_the_spfile -c RACONENODE -e node1,node2,... -i $DB_NAME -w 5 -t IMMEDIATE -j "acfs_oracle_home1,acfs_oracle_home2,..." -a "data_group1,data_group2,..."

For example:
[oracle@ajithpathiyil1 ~]$ srvctl add database -d racone -o /u01/app/oracle/product/11.2.0.3/db_2 -p +data_1/RACONE/PARAMETERFILE/spfile.50125.787548439 -c RACONENODE -e ajithpathiyil1,ajithpathiyil2 -i racone -w 5 -t IMMEDIATE -j "/u01/app/oracle/product/11.2.0.3/db_1,/u01/app/oracle/product/11.2.0.3/db_2" -a "data_1"

Start the database, if it is not already started:
[oracle@ajithpathiyil1 ~]$ srvctl start database -d racone

Add the required services. Do NOT specify preferred and available instances.
[oracle@ajithpathiyil1 ~]$ srvctl add service -d $DB_NAME -s service_name
[oracle@ajithpathiyil1 ~]$ srvctl start service -d $DB_NAME

Restore the saved environment variables with srvctl setenv.

Problems starting the second RAC One Node instance

Symptoms: the relocation process finishes without errors but the database is down. The alert log of the source instance contains the following error:

Errors in file /u01/misc/diag/rdbms/racone/racone_1/trace/racone_1_rms0_15424.trc:
ORA-19815: WARNING: db_recovery_file_dest_size of 1073741824 bytes is 100.00% used, and has 0 remaining bytes available.
************************************************************************
You have following choices to free up space from recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard, then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN BACKUP RECOVERY AREA command.
3.
Add disk space and increase db_recovery_file_dest_size parameter to reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating system command was used to delete files, then use RMAN CROSSCHECK and DELETE EXPIRED commands.
************************************************************************
ORA-19809 signalled during: alter database add logfile thread 1 SIZE 1073741824 , SIZE 1073741824 , SIZE 1073741824...

Cause: the RAC database contained only one redo thread before conversion. RAC One Node requires two threads for relocation. It creates the second thread automatically, but it may fail to do so because of inappropriate database parameters.

Solution: provide enough space for the online redo logs.

1) Increase db_recovery_file_dest_size:

SQL> alter system set db_recovery_file_dest_size=10G scope=both sid='*';

2) Remove expired archive logs from the RMAN repository:

[oracle@ajithpathiyil1 ~]$ rman target /
crosscheck archivelog all;
delete expired archivelog all;

4 Enabling TAF

You can enable TAF for services that will be used with applications that support it. Note: Oracle E-Business Suite does NOT support TAF.

srvctl modify service -d $DB_NAME -s service_name -P BASIC -m BASIC -e [SELECT|SESSION] -z 10 -w 10

where
-e – failover type. SESSION means only the session is moved to the new instance; SELECT means current fetches are also continued on the new instance. Note: with the SELECT option, TAF can restart a query after failover has completed, but for other types of transactions, such as INSERT, UPDATE or DELETE, the application must roll back the failed transaction and resubmit it. You must also re-execute any session customizations (in other words, ALTER SESSION statements) after failover has occurred.
-z – failover retries
-w – failover delay between retries, in seconds

For example:
[oracle@ajithpathiyil1 ~]$ srvctl modify service -d racone -s racone_sql -P BASIC -m BASIC -e SELECT -z 10 -w 5

HAPPY LEARNING!

Ajith Narayanan
Oracle ACE Associate
Leader - AIOUG - Oracle RAC SIG
Ex-Website Chair (2011 - 2013): http://www.oracleracsig.org
Blog: http://oracledbascriptsfromajith.blogspot.com
LinkedIn, Facebook, Twitter
+91 9008488882
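To confirm that TAF is actually in effect for connected sessions (a hedged sketch, not part of the original article), you can query V$SESSION, which exposes the failover attributes negotiated by each session:

-- Each connected session shows the failover type/method it negotiated
-- and whether it has already failed over to another instance
SQL> select username, service_name, failover_type, failover_method, failed_over
     from   v$session
     where  username is not null;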

Blog Post: Tim Hall: PL/SQL presenter extraordinaire

If it's PL/SQL, I'm interested. And if it's PL/SQL involved in winning some sort of award, well, I am downright excited. So I was very pleased indeed to see that Tim Hall of Oracle-Base fame won the UK Best Speaker award at UKOUG 2014 for his talk on Improving the Performance of PL/SQL Function Calls from SQL. Tim is an engaging writer and speaker, but of course that's not why he won the award. It was because the topic he chose was so incredibly important and exciting. Still, I thought it would at least be polite to say to Tim: Congratulations, Tim! Keep up the great work! Tim is one of those Oracle experts (and an ACE Director to boot) who is incredibly generous with his time and knowledge, as anyone who has spent any time on Oracle-Base.com will know. He can also be quite hilarious, as you can tell from his recent blog post on UKOUG 2014. Here are links to the slide deck, demo scripts and an article, all focused on improving the performance of PL/SQL function calls from SQL:

Slides: http://oracle-base.com/workshops/efficient-function-calls/EfficientFunctionCalls.ppt
Demos: http://oracle-base.com/workshops/efficient-function-calls/demos.zip
Article: http://oracle-base.com/articles/misc/efficient-function-calls-from-sql.php

Blog Post: Three New Members of the Oracle Database Evangelist Team

A long, long time ago....I announced that I had been given the honor of assembling a team of evangelists, whose job would be to promote Oracle Database as an application development platform. In other words, make sure that current and future users fully leverage all the amazing features for developers that are baked into Oracle Database, such as SQL, PL/SQL, Oracle Text, Oracle Spatial, edition-based redefinition and more. I am very pleased to announce that my team has now swelled dramatically from one person (me) to four, with one more to come on board in early 2015. I will make a more "formal" announcement of our team and our plans in Q1 2015, but for now, I did want to share the joyful feeling I feel. Drum roll, please.... Todd Tricher, Community Manager Todd got his start at Oracle working with Partners in Alliances. For over a decade he has been focused on technology outreach, working closely with development to drive "grass-roots" engagements in both Oracle and open source developer communities.  He is passionate about family and building community, loves meeting new people and learning new technologies, while sharing what he's learned with others. Natalka Roshak, SQL Evangelist It all started with an innocuous student job as a data analyst. It wasn't long before Natalka was firmly hooked on SQL. Since then she's been a developer, a DBA, a RAC guru, and sometimes all of the above at once. She's excited to have an opportunity to share her passion for and knowledge about SQL with others, especially those new to  relational  technology. Dan McGhan,  Javascript/HTML5 Evangelist Dan suffers from Compulsive Programming Disorder, which is believed to be linked to his balding. Having started his development career in the land of MySQL and PHP, he was only too happy to have stumbled upon Oracle Application Express. Since then, he’s dedicated his programming efforts to learning more about Oracle and web based technologies in general. These days he enjoys sharing the passion he's developed for JavaScript and HTML5 with others. Dan shared his decision to join Oracle on his blog. Read more here .

Blog Post: How to Select the Best Hardware for Your Database

Throughout our career as DBAs within an organization, more than once we will face the need to acquire equipment to host the database we administer. Whether we are buying hardware for a new database or replacing existing equipment, we will be in the position of having to select or recommend what we consider the best solution for the organization we work for. From there, we will see that the market offers us different alternatives. And, surely, each vendor will tell us about the excellent virtues of its own solution, and will give us a thousand arguments to convince us that theirs is the best of all. However, we cannot rely solely on the vendors' arguments; we need to analyze and compare by our own means. The question that arises is: of all the options I have, which is the most suitable for the organization I work for? It is not great to answer a question with more questions, but in this case it is almost essential to ask ourselves some new ones:

1) Which is the best solution from the economic point of view? This is easy to answer. Almost all of us will agree that the best alternative from the economic point of view is the one with the lowest price. Unfortunately, price does not always go hand in hand with quality or performance, and therefore we cannot decide based on price alone. Then the next question appears:

2) Which is the best solution from the technical point of view? Answering this new question is not so simple. When technically evaluating a hardware proposal there are many elements to consider. One method we can use is to build a matrix in which we place all the attributes and capabilities we consider important: processing capacity, memory, disk capacity, vertical scaling capability, power consumption, etc. The points to consider will not all have the same level of importance, some will carry more "weight" than others, so it is important to use some weighting factor. Then we fill in our matrix with the data that comes from the different vendors' proposals. The Internet is also a source of information that can be valuable. A site that I find useful and valuable is TPC.org (although it does not always have completely up-to-date information). What is TPC.org? TPC is an organization that publishes objective performance information for different database solutions. How does TPC obtain such objective data? It does so through benchmarks, that is, through performance measurement techniques, and the results are published on its website. And you, what methodology do you use when selecting and recommending equipment for your database?

Comment on Three New Members of the Oracle Database Evangelist Team

Congratulations! And do not hesitate to count on the "Comunidad Oracle Hispana" community to spread your initiatives. Fernando.

Blog Post: The Hidden Benefit of PeopleSoft Selective Adoption

There has been a lot of talk over the last couple of weeks about PeopleSoft Selective Adoption, the recently coined term for the PeopleSoft Update Manager delivery model. Much of this has been about the direct benefits to the customer, which is how it should be. Greg Parikh has linked to some of the posts on LinkedIn. While discussing this with a colleague at the recent Apps14 conference, we noticed that there is another implication that I've not seen anyone else call out yet. Although at first glance it seems an immediate advantage to Oracle, it's not difficult to see how the customer is also going to reap significant rewards. Getting everyone onto 9.2, and then delivering innovation on that version, means that PeopleSoft development can operate on a single codeline. Currently, a legislative update has to be coded and applied for all supported releases (and each version might require the update to be different, depending upon the underlying pages), meaning a lot of extra complication and repeat work. A Global Payroll update might need to be created for v9.2, v9.1 and v9.0, for instance, which is a significant burden. Once updates are only being created on the v9.2 codeline, they only have to be done once, saving development staff time (and support staff a lot of troubleshooting time) and thereby freeing them up to concentrate much more time on delivering extra value to customers in the way of faster updates and more innovative new functionality. This can only be a big plus in the long run.

Blog Post: Resolving Latch Contention

What are latches?

Latches are serialization mechanisms that protect areas of Oracle's shared memory (the SGA). In simple terms, latches prevent two processes from simultaneously updating, and possibly corrupting, the same area of the SGA.

Oracle sessions need to read from or update the SGA for almost all database operations. For instance:

- When a session reads a block from disk, it must modify a free block in the buffer cache and adjust the buffer cache LRU (Least Recently Used) chain.
- When a session reads a block from the SGA, it will modify the LRU chain.
- When a new SQL statement is parsed, it is added to the library cache within the SGA.
- As modifications are made to blocks, entries are placed in the redo buffer.
- The database writer periodically writes buffers from the cache to disk (and must update their status from "dirty" to "clean").
- The redo log writer writes entries from the redo buffer to the redo logs.

Latches prevent any of these operations from colliding and possibly corrupting the SGA.

How latches work

Because the duration of operations against memory is very small (typically on the order of nanoseconds) and the frequency of latch requests is very high, the latching mechanism needs to be very lightweight. On most systems, a single machine instruction called "test and set" is used to see if the latch is taken (by looking at a specific memory address) and, if not, to acquire it (by changing the value in the memory address).

If the latch is already in use, Oracle can assume that it will not be in use for long, so rather than going into a passive wait (relinquishing the CPU and going to sleep) Oracle will retry the operation a number of times before giving up. This algorithm is called acquiring a spin lock, and the number of "spins" before sleeping is controlled by the Oracle initialization parameter "_spin_count". The first time the session fails to acquire the latch by spinning, it will sleep and attempt to awaken after a millisecond or so. Subsequent waits increase in duration and in extreme circumstances may reach hundreds of milliseconds. In a system suffering from intense latch contention, these waits will have a severe impact on response time and throughput. Figure 1 uses Spotlight's wait histogram display to show how latch waits typically last only a millisecond or two.

Figure 1: Latch wait durations shown in Spotlight's wait histogram display

Causes of latch contention

The latches that most frequently affect performance are those protecting the buffer cache, areas of the shared pool, and the redo buffer.

Library cache and shared pool latches: These latches protect the library cache in which sharable SQL is stored. In a well-defined application there should be little or no contention for these latches, but in an application that uses literals instead of bind variables (for instance "WHERE surname='HARRISON'" rather than "WHERE surname=:surname"), library cache contention is common.

Redo copy/redo allocation latches: These latches protect the redo log buffer, which buffers entries made to the redo log. They were a significant problem in earlier versions of Oracle but are rarely encountered today.

Cache buffers chains latches: These latches are held when sessions read or write buffers in the buffer cache. There are typically a very large number of these latches, each of which protects only a handful of blocks. Contention on these latches is typically caused by concurrent access to a very "hot" block, and the most common type of hot block is an index root or branch block (since any index-based query must access the root block).
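The relative activity of these latch families can be checked directly from the data dictionary. The query below is a minimal sketch against the standard V$LATCH view; note that the exact latch names vary between Oracle versions (in 11g, for example, much of the library cache protection moved to mutexes), so adjust the IN-list to what your release actually reports.

-- Sketch: compare gets, misses and sleeps for the latch families discussed above
SELECT name,
       gets,
       misses,
       sleeps,
       ROUND(misses * 100 / NULLIF(gets, 0), 2) AS miss_pct
FROM   v$latch
WHERE  name IN ('shared pool',
                'library cache',
                'cache buffers chains',
                'redo allocation',
                'redo copy')
ORDER  BY sleeps DESC;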
Detecting latch contention

Oracle's wait interface makes it relatively easy to detect latch contention and, from 10g onwards, to accurately identify the specific latch involved. In 10g and 11g, each latch has its own wait category, so if waits on a specific latch become significant we can deduce a latch contention problem. In Spotlight, general latch contention causes the server processes to change color and a general latch alarm to fire (Figure 2).

Figure 2: The Spotlight latch contention alarm

As often as not, latch contention will be associated with other symptoms that can help diagnose the root cause of the problem. For instance, in Figure 3 we see that a high shared pool miss rate and a shared pool lock alarm are also current. These alarms are typical of a system in which a high rate of dynamic (non-sharable) SQL is causing shared pool or library cache latch contention.

Figure 3: Latch alarms are usually associated with other symptoms

In the "bad old days" we often had to use ratio-based techniques to determine which latch was causing a problem, since Oracle collated all latch waits into a single category. Because we didn't know which latch was responsible for the greatest amount of waiting, we would typically examine "miss" and "sleep" rates. A "miss" occurs when a session cannot immediately obtain a latch; a "sleep" occurs when the session cannot obtain the latch even after retrying up to the value of the parameter "_spin_count". These values can be found in the V$LATCH view or in the Spotlight Latches drilldown (Figure 4). You may still need to refer to this data when dealing with latch contention in Oracle 9i and earlier; in 10g and 11g you can use the wait statistics (Figure 1) to determine the latch that is having the most impact.

Figure 4: The Spotlight latch drilldown

It's a valid assumption that the latch with the most sleeps is contributing the most latch free waits. However, the "latch miss rate", the metric most commonly used to identify latch contention in the past, is not an accurate measure of latch contention.

Tuning the application to avoid latch contention

There are some things we can do within our application design to reduce contention for latches.

Using bind variables

As noted earlier, failure to use bind variables within an application is the major cause of library cache and/or shared pool latch contention. All Oracle applications should make use of bind variables whenever possible.

However, all is not lost if you are unable to modify your application code. You can also use the CURSOR_SHARING parameter to cause Oracle to modify SQL on the fly to use bind variables. A setting of FORCE causes all literals to be converted to bind variables. A setting of SIMILAR causes statements to be rewritten only if doing so would not cause the statement's execution plan to vary (which can happen if there are histogram statistics defined on a column referenced in the WHERE clause).

CURSOR_SHARING is one of the few silver bullet parameters that can instantly improve performance. Figure 5 shows how performance changed on my latch-constrained system when I changed CURSOR_SHARING to FORCE (using Spotlight's Parameters page).
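To make the distinction concrete, here is a minimal sketch (the employees table and the bind name simply follow the surname example used above; they are not from any particular application):

-- Non-sharable: each distinct literal creates a new statement in the library cache
SELECT * FROM employees WHERE surname = 'HARRISON';

-- Sharable: one cursor, re-executed with different bind values
SELECT * FROM employees WHERE surname = :surname;

-- If the application cannot be changed, literals can be replaced on the fly
ALTER SYSTEM SET cursor_sharing = FORCE;

Note that in later Oracle releases SIMILAR was deprecated, so FORCE is generally the value to test.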
On changing the parameter (at 4:10pm) the execution rate more than doubled, latch contention was eliminated, and my SQL area miss rate halved.

Figure 5: The cursor_sharing parameter can be a silver bullet for library cache/shared pool latch contention

Dealing with cache buffers chains contention

Cache buffers chains latch contention is one of the most intractable types of latch contention. There are a couple of things you can do at the application level to reduce its severity.

I always recommend that the application workload be optimized before dealing with contention issues (see, for instance, SystematicOracletuning.pdf), but in the case of cache buffers chains contention it is particularly important. Cache buffers chains contention occurs most often because of very high logical read rates on a relatively small number of database blocks. A common cause of this phenomenon is SQL which repeatedly and unnecessarily reads the same blocks over and over again. So first, identify the SQL that is associated with the most cache buffers chains latch activity and see if that SQL should be tuned. Spotlight's waiting events screen (Figure 6) can be used to find the SQL concerned and, of course, SQL Optimizer will provide you with options for tuning it.

Figure 6: Finding the SQL associated with a latch free wait

If you are satisfied that the SQL is optimized but you still have a cache buffers chains latch contention problem, try to identify the blocks that are "hot". Metalink note 163424.1, "How to Identify a Hot Block Within The Database", describes how to do this; a sketch of the kind of queries involved appears at the end of this section. Having identified the hot block, you may find that it is an index root or branch block. If this is the case, there are two application design changes that may help:

- Consider partitioning the table and using local indexes. This might allow you to spread the heat amongst multiple indexes (you will probably want to use hash partitioning to ensure an even spread of load amongst the partitions).
- Consider converting the table to a hash cluster keyed on the columns of the index. This allows the index to be bypassed completely and may also result in some other performance improvements. However, hash clusters are suitable only for tables of relatively static size, and determining optimal settings for the SIZE and HASHKEYS storage parameters is essential.

If the block is a table block that just happens to be very heavily accessed, then partitioning may still help by spreading the load across multiple partitions. However, if it is actually a single row that is hot, then you may need to review your application design. Alternatively, you can try adjusting the _spin_count parameter (as discussed below).
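The following is a minimal sketch in the spirit of note 163424.1, not a verbatim copy of it: find the cache buffers chains child latches with the most sleeps, then map their addresses to the buffers they protect. X$BH is an internal structure that must be queried as SYS and whose columns can differ between versions, so treat the queries as illustrative.

-- Step 1: which cache buffers chains child latches sleep the most?
SELECT addr, child#, gets, misses, sleeps
FROM   v$latch_children
WHERE  name = 'cache buffers chains'
ORDER  BY sleeps DESC;

-- Step 2: list the hottest buffers protected by the worst child latches.
-- TCH is the touch count; higher values usually indicate hotter blocks.
SELECT b.obj, b.file#, b.dbablk, b.tch
FROM   x$bh b
WHERE  b.hladdr IN (SELECT addr
                    FROM   (SELECT addr
                            FROM   v$latch_children
                            WHERE  name = 'cache buffers chains'
                            ORDER  BY sleeps DESC)
                    WHERE  ROWNUM <= 10)
ORDER  BY b.tch DESC;

The OBJ value can then be resolved to an object name via DBA_OBJECTS (DATA_OBJECT_ID) to see which table or index owns the hot block.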
Is latch contention inevitable?

While conducting performance tuning consultancies and visiting customer sites over the years, I have noticed that the most highly optimized databases running on the most high-end hardware seem to be the ones that suffer most significantly from latch contention. It would appear that as we remove all other constraints on database performance, contention for latches becomes the ultimate limiting factor on database throughput.

Imagine we have a perfectly tuned application: we have allocated sufficient memory to the SGA, we have a sufficiently low-latency IO subsystem that waits for IO are negligible, and CPU is abundant and exceeds the demands of the application. When we reach this highly desirable state, the database will be doing almost nothing but shared memory accesses, and hence latches (which prevent simultaneous access to the same shared memory areas) will become the limiting factor. So it may be that some degree of latch contention, possibly on the cache buffers chains latches, has to be accepted in very high volume systems running on extremely powerful hardware.

Investigating spin_count

Back when I started working with Oracle (Oracle version 5, if you must know), the spin count parameter (or latch_spin_count) was a documented parameter, and many DBAs attempted to adjust it to resolve latch contention. However, ever since Oracle8i the parameter has been "undocumented": it does not appear in V$PARAMETER and is not documented in the Oracle reference manual; it is now the hidden parameter "_spin_count". Why did Oracle do this?

The official Oracle corporate line is that the value of spin_count is correct for almost all systems and that adjusting it can cause degraded performance. For instance, Metalink Note 30832.1 says: "If a system is not tight on CPU resource SPIN_COUNT can be left at higher values but anything above 2000 is unlikely to be of any benefit." However, I think the real reason is that Oracle was unable to provide good guidance on the correct value for _spin_count and therefore decided that, since adjusting it was just as likely to cause harm as to improve performance, the best option was to address latch contention through other means.

What I've found is that higher values of _spin_count can relieve latch contention in many circumstances, and I think Oracle deprecated the parameter incorrectly. But I do think it's critical that any adjustment to _spin_count be performed with a valid measurement framework in place, so that you can determine whether the change has had a beneficial effect.

Oracle set the default value of spin_count to 2000 in Oracle7. Over the subsequent 10 or so years, CPU clock speed increased by more than a factor of 10. This means that Oracle systems are spending a decreasing amount of time trying to obtain the latch before dropping into a sleep, so it is arguable that the default value of spin_count should have been increased with each release of Oracle.

I conducted some experiments into the effect of adjusting spin count on the performance of a system suffering from heavy latch contention. For my research, I created a simulation in which severe latch contention was induced on an 11g database. I then adjusted _spin_count programmatically across a wide range of values and recorded the impact on database throughput, latch waits and CPU utilization. Figure 7 summarizes the relationship between database throughput (as measured by the number of SQL statement executions per second), the amount of time spent in latch waits, and the CPU utilization of the system (as measured by the CPU run queue).

The data indicates that as _spin_count increased, waits for latches reduced while CPU utilization increased. As CPU utilization saturated (an average run queue per processor of one or more), the improvements in throughput and the reduction in latch free time tailed off. Note that the optimal value for spin_count in this simulation was somewhere in the vicinity of 10,000, five times the default value provided by Oracle; throughput had increased by about 80% at this value.

Figure 7: Relationship between spin count, CPU, latch waits and throughput
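For reference, here is a minimal sketch of how such an adjustment is typically made. It is not taken from the article and should be used with caution: _spin_count is undocumented, so test on a non-production system, measure throughput before and after, and confirm for your version whether the change takes effect dynamically or only after a restart.

-- Check the current hidden setting (requires access to the X$ views, so run as SYS)
SELECT i.ksppinm AS parameter, v.ksppstvl AS value
FROM   x$ksppi  i,
       x$ksppcv v
WHERE  i.indx = v.indx
AND    i.ksppinm = '_spin_count';

-- Raise the spin count; SCOPE=SPFILE defers the change to the next restart
ALTER SYSTEM SET "_spin_count" = 4000 SCOPE = SPFILE;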
Clearly, manipulating the value of _spin_count can result in very significant reductions in latch free waits and improve the throughput of latch-constrained applications. As an undocumented parameter, many DBAs will be reluctant to manipulate _spin_count. However, if faced with intractable latch contention, manipulating _spin_count may be the best available option for improving database throughput.

_spin_count should only be adjusted when there are available CPU resources on the system. Specifically, if the average CPU queue length is approaching or greater than 1, then increasing _spin_count is unlikely to be effective. Note that as of 9iR2 you can use the undocumented _latch_class_* parameters to change the spin count for individual latches, which might be desirable in certain unusual circumstances.

Adjusting your spin count with Spotlight

Spotlight on Oracle incorporates an automated tuning module that will attempt to establish the optimal value for spin count on your system. Spotlight will experiment with various values of spin count and determine which value results in the best throughput. Built-in constraints, which you can configure, prevent Spotlight from continuing if any performance degradation is encountered.

Figure 8: Finding the optimal value for _spin_count with Spotlight

Conclusion

Latches protect areas of Oracle shared memory from concurrent access in roughly the same way that locks protect data in tables. When a session wants a latch, it will repeatedly attempt to obtain it until reaching the value of "_spin_count", after which it will sleep and a "latch free" wait will occur. Excessive latch sleeps can create restrictions on throughput and response time.

The two most frequently encountered forms of latch contention in modern Oracle (10g/11g) are:

- Library cache/shared pool latch contention. This is usually caused when an application issues high volumes of SQL that are non-sharable due to an absence of bind variables. The CURSOR_SHARING parameter can often be used to alleviate this form of contention.
- Cache buffers chains latch contention. This is usually associated with very high logical read rates and "hot" blocks within the database (often index blocks). After tuning SQL to reduce logical IO and eliminate repetitive reads of the same information, partitioning is often a possible solution.

If latch contention is causing serious problems, and the system has some free CPU capacity, adjusting the value of the undocumented parameter _spin_count may be effective in reducing contention. As always, modifying undocumented parameters should be approached with great caution.

Spotlight has a variety of alarms, advice and diagnostic screens for identifying and diagnosing latch contention, including the capability of establishing the optimum _spin_count value for a particular workload.

