Friday, November 30, 2012

ORA-01090

The first screen print below shows 3 sessions on the same UNIX machine.
 
In the first session, I connected to ORCL as an externally identified user at 14:10:46.
 
In the second session, I connected to ORCL as SYS at 14:12:07 and issued a shutdown command. There are several ways to do this: shutdown normal, shutdown transactional, shutdown immediate and shutdown abort. The default, if no option is specified, is shutdown normal. This makes the shutdown command wait until all other sessions have logged out, and that is what happened here.
 
Once you have run a shutdown command, Oracle does not allow other users to connect to the database. In the third session, I tried to connect to database ORCL as an externally identified user at 14:14:13. This connection failed with an ORA-01090.
   

The second screen print below shows sessions 1 and 2 above a little later. The externally identified user logged out of the database in session 1 at 14:18:51. This allowed the shutdown command in session 2 to complete:


 

Wednesday, November 28, 2012

Swapping


You can see if any swapping has taken place since a UNIX server was started by running vmstat -s. If you see values other than zero on the swap ins or swap outs lines, then swapping has occurred:

Solaris > vmstat -s
     4123 swap ins
      118 swap outs
    12369 pages swapped in
    35254 pages swapped out
93593074720 total address trans. faults taken
16175338065 page ins
67075526 page outs
167117261315 pages paged in
153673896 pages paged out
157281194355 total reclaims
157268182842 reclaims from free list
        0 micro (hat) faults
93593074720 minor (as) faults
12433082155 major faults
22790008329 copy-on-write faults
30659757855 zero fill page faults
228888590 pages examined by the clock daemon
      112 revolutions of the clock hand
146623357 pages freed by the clock daemon
251218086 forks
  8106297 vforks
197058709 execs
994409866209 cpu context switches
584512421794 device interrupts
135390836607 traps
2614749602197 system calls
328390806433 total name lookups (cache hits 99%)
7844654930 user   cpu
8057496148 system cpu
25835471029 idle   cpu
        0 wait   cpu
Solaris >
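The same check can be scripted. Here is a minimal Python sketch that sums the swap ins and swap outs lines from captured vmstat -s output:

```python
import re

def swapping_occurred(vmstat_s_output):
    """Return True if the 'swap ins' or 'swap outs' lines in
    `vmstat -s` output show a non-zero count."""
    total = 0
    for line in vmstat_s_output.splitlines():
        # Only the two swap in/out counter lines, not "pages swapped in/out"
        m = re.match(r"\s*(\d+)\s+swap (ins|outs)\s*$", line)
        if m:
            total += int(m.group(1))
    return total > 0

sample = """\
     4123 swap ins
      118 swap outs
    12369 pages swapped in
    35254 pages swapped out
"""
print(swapping_occurred(sample))  # True: 4123 swap ins, 118 swap outs
```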

Tuesday, November 27, 2012

How to See the Version Number of an Oracle Client

If you want to find out the version number of an Oracle client from Windows, open a Command Prompt, type tnsping and press Enter / Return. No parameters are required. You will see the Oracle version number (10.2.0.4.0 in this case) and whether the client is 32-bit or 64-bit.

Another option, suggested by Laurent Schneider, is to use sqlplus -v:

Tru64 > sqlplus -v

SQL*Plus: Release 9.2.0.7.0 - Production

Tru64 >

H:\>sqlplus -v

SQL*Plus: Release 10.2.0.4.0 - Production

H:\> 

Solaris > sqlplus -v

SQL*Plus: Release 11.2.0.2.0 Production 

Solaris >
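If you need the version in a script rather than on screen, the release number can be picked out of the banner with a regular expression. A minimal Python sketch:

```python
import re

def sqlplus_version(banner):
    """Pull the release number out of a `sqlplus -v` banner line.
    Returns None if no version number is found."""
    m = re.search(r"Release\s+(\d+(?:\.\d+)+)", banner)
    return m.group(1) if m else None

print(sqlplus_version("SQL*Plus: Release 10.2.0.4.0 - Production"))  # 10.2.0.4.0
```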

Sunday, November 25, 2012

%NOTFOUND

This simple example was tested on Oracle 11.2. It uses SQL*Plus to list the tables owned by a user and counts the rows in each one:

SQL> select table_name
  2  from dba_tables
  3  where owner = 'ANDREW'
  4  /
 
TABLE_NAME
------------------------------
T1
T2
T3
 
SQL> select count(*) from andrew.t1
  2  /
 
  COUNT(*)
----------
      3082
 
SQL> select count(*) from andrew.t2
  2  /
 
  COUNT(*)
----------
      4851
 
SQL> select count(*) from andrew.t3
  2  /
 
  COUNT(*)
----------
     73942
 
SQL>

Then it does the same thing in PL/SQL. Note the use of %NOTFOUND to exit the loop when there is nothing more to fetch from the cursor (SET SERVEROUTPUT ON is needed for the DBMS_OUTPUT lines to appear):

SQL> DECLARE
  2   CURSOR c1 is
  3    SELECT owner||'.'||table_name fqn
  4    FROM dba_tables
  5    WHERE owner = 'ANDREW';
  6   c1_rec c1%ROWTYPE;
  7   row_count NUMBER;
  8   sql_statement VARCHAR2(200);
  9  BEGIN
10   OPEN c1;
11    LOOP
12     FETCH c1 INTO c1_rec;
13     EXIT WHEN c1%NOTFOUND;
14     sql_statement :=
15      'SELECT COUNT(*) FROM '||c1_rec.fqn;
16     DBMS_OUTPUT.PUT_LINE(sql_statement);
17     EXECUTE IMMEDIATE sql_statement
18      INTO row_count;
19     DBMS_OUTPUT.PUT_LINE(c1_rec.fqn||
20      ' has '||row_count||' rows');
21    END LOOP;
22   CLOSE c1;
23  END;
24  /
SELECT COUNT(*) FROM ANDREW.T1
ANDREW.T1 has 3082 rows
SELECT COUNT(*) FROM ANDREW.T2
ANDREW.T2 has 4851 rows
SELECT COUNT(*) FROM ANDREW.T3
ANDREW.T3 has 73942 rows
 
PL/SQL procedure successfully completed.
 
SQL>
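The fetch-then-test pattern is not specific to PL/SQL. Here is a rough Python equivalent of the loop above, using None to stand in for the %NOTFOUND condition (the table names and row counts are just the values from the example):

```python
def fetch(cursor):
    """Mimic a PL/SQL FETCH: return the next row, or None once the
    cursor is exhausted (the %NOTFOUND condition)."""
    return next(cursor, None)

tables = {"ANDREW.T1": 3082, "ANDREW.T2": 4851, "ANDREW.T3": 73942}
c1 = iter(tables)               # stands in for cursor c1

results = []
while True:
    c1_rec = fetch(c1)
    if c1_rec is None:          # EXIT WHEN c1%NOTFOUND
        break
    results.append(f"{c1_rec} has {tables[c1_rec]} rows")

for line in results:
    print(line)
```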

ORA-01998

This was tested on Oracle 11.2.0.2.7. I wondered what would happen if you tried to revoke SYSDBA or SYSOPER from SYS so I decided to try it out: 

SQL> revoke sysdba from sys;
revoke sysdba from sys
*
ERROR at line 1:
ORA-01998: REVOKE failed: user SYS always has SYSOPER
and SYSDBA

SQL> revoke sysoper from sys;
revoke sysoper from sys
*
ERROR at line 1:
ORA-01998: REVOKE failed: user SYS always has SYSOPER
and SYSDBA

SQL>

Wednesday, November 21, 2012

ORA-02393

This example, which was tested on Oracle 11.2.0.2.7, shows how you can limit the amount of CPU time used by an SQL statement. Before you start, you need to ensure that RESOURCE_LIMIT is set to TRUE, otherwise profile limits will not be enforced:

SQL> alter system set resource_limit = true
  2  /
 
System altered.
 
SQL>

Then you need to create a profile which will limit CPU per statement to 1 second (CPU_PER_CALL is specified in hundredths of a second, so 100 means 1 second):

SQL> create profile for_andrew
  2  limit cpu_per_call 100
  3  /
 
Profile created.
 
SQL>

Next you need to create a user who will be assigned this profile and connect to the database with it:

SQL> create user andrew identified by reid
  2  profile for_andrew
  3  /
 
User created.
 
SQL> grant create session,
  2        alter session,
  3        select any dictionary to andrew
  4  /
 
Grant succeeded.

SQL> conn andrew/reid
Connected.
SQL>

After this, you need to run a SQL statement which will use more than 1 second of CPU time. Before doing this, check the session's CPU usage so far:

SQL> select a.value/100
  2  from v$mystat a, v$sysstat b
  3  where a.statistic# = b.statistic#
  4  and b.name = 'CPU used by this session'
  5  /
 
A.VALUE/100
-----------
        .02
 
SQL>

... then execute the statement and wait for it to fail:

SQL> select count(*)
  2  from dba_tables a, dba_tables b
  3  /
from dba_tables a, dba_tables b
     *
ERROR at line 2:
ORA-02393: exceeded call limit on CPU usage
 
SQL>

Once the statement has failed, check that it has only increased the session's CPU usage by a little over 1 second:

SQL> select a.value/100
  2  from v$mystat a, v$sysstat b
  3  where a.statistic# = b.statistic#
  4  and b.name = 'CPU used by this session'
  5  /
 
A.VALUE/100
-----------
       1.34
 
SQL>
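Since the statistic is held in hundredths of a second, the two queries above can be reduced to a simple subtraction. A quick Python sketch of the arithmetic, using the two raw values returned above:

```python
def cpu_seconds(centiseconds):
    """'CPU used by this session' is reported in hundredths of a
    second, hence the value/100 in the queries above."""
    return centiseconds / 100

before = 2      # raw statistic before the statement (0.02 seconds)
after = 134     # raw statistic after it failed (1.34 seconds)

print(cpu_seconds(after - before))  # 1.32
```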

Tuesday, November 20, 2012

ORA-02002

This was tested on Oracle 11.2.0.2.7. I wondered what would happen if you were auditing connections, SYS.AUD$ filled up, and you then tried to log in again. First I started auditing:

SQL> conn / as sysdba
Connected.
SQL> show user
USER is "SYS"
SQL> audit create session
  2  /
 
Audit succeeded.
 
SQL> select value from v$parameter
  2  where name = 'audit_trail'
  3  /
 
VALUE
-------------------------------------------------------
TRUE
 
SQL>

Then I checked that auditing was working: 

SQL> select count(*) from sys.aud$
  2  /
 
  COUNT(*)
----------
         0
 
SQL> conn andrew/reid
Connected.
SQL> show user
USER is "ANDREW"
SQL> conn / as sysdba
Connected.
SQL> show user
USER is "SYS"
SQL> select count(*) from sys.aud$
  2  /
 
  COUNT(*)
----------
         2
 
SQL>

Next I looked at the current and maximum possible size of SYS.AUD$:

SQL> select count(*) from dba_extents
  2  where owner = 'SYS'
  3  and segment_name = 'AUD$'
  4  /
 
  COUNT(*)
----------
         1
 
SQL> select next_extent, max_extents
  2  from dba_tables
  3  where owner = 'SYS'
  4  and table_name = 'AUD$'
  5  /
 
NEXT_EXTENT MAX_EXTENTS
----------- -----------
    1048576  2147483645
 
SQL>
 
I tried to limit the maximum possible size of SYS.AUD$. I found that I could adjust the size of the next extent: 

SQL> alter table sys.aud$ storage (next 8k)
  2  /
 
Table altered.
 
SQL>

... but not the maximum number of extents: 

SQL> alter table sys.aud$ storage (maxextents 10)
  2  /
alter table sys.aud$ storage (maxextents 10)
*
ERROR at line 1:
ORA-25150: ALTERING of extent parameters not permitted
 
SQL>

So the only way to make SYS.AUD$ run out of space is to set a limit on the size of the tablespace it is in. To make this easier, I moved SYS.AUD$ to the USERS tablespace, which in this database is 2 megabytes in size with autoextend off:

SQL> alter table sys.aud$ move tablespace users
  2  /
 
Table altered.
 
SQL>

Then I set up a loop in a UNIX shell script to connect to the database again and again. Each iteration connected as ANDREW first then as SYS. It started with a count of 1: 

$count is 1
 
SQL*Plus: Release 11.2.0.2.0 Production on Tue Oct 16 18:37:44 2012
 
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
 
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
 
SQL> USER is "ANDREW"
SQL> Connected.
SQL> USER is "SYS"
SQL>   2
AUDIT_ROWS
----------
         4
 
SQL>   2    3    4
LOGONS_CUMULATIVE
-----------------
             4505
 
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Connection as user ANDREW failed with ORA-02002 when $count reached 1986: 

$count is 1986
 
SQL*Plus: Release 11.2.0.2.0 Production on Tue Oct 16 18:51:36 2012
 
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
 
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
 
SQL> USER is "ANDREW"
SQL> ERROR:
ORA-02002: error while writing to audit trail
ORA-00604: error occurred at recursive SQL level 1
ORA-01653: unable to extend table SYS.AUD$ by 8 in tablespace USERS
 
Connected.
SQL> USER is "SYS"
SQL>   2
AUDIT_ROWS
----------
      3973
 
SQL>   2    3    4
LOGONS_CUMULATIVE
-----------------
             8484
 
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

I was still able to connect as SYS so, if I had wanted to continue using the database, I could have deleted rows from SYS.AUD$, enlarged the USERS tablespace or simply stopped auditing.
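The behaviour can be pictured as a fixed-size audit trail that SYS bypasses (SYS connections are audited to the operating system, not to SYS.AUD$, which is why they still succeeded). This toy Python model is only an illustration of that, not anything Oracle runs internally:

```python
class AuditTrail:
    """Toy model of SYS.AUD$ in a fixed-size tablespace.
    SYS logons are audited to the OS, not to AUD$, so they
    still succeed when the tablespace is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.rows = 0

    def logon(self, user):
        if user == "SYS":
            return "Connected."
        if self.rows >= self.capacity:
            return "ORA-02002: error while writing to audit trail"
        self.rows += 1          # one audit record per audited logon
        return "Connected."

aud = AuditTrail(capacity=2)
print(aud.logon("ANDREW"))      # Connected.
print(aud.logon("ANDREW"))      # Connected.
print(aud.logon("ANDREW"))      # ORA-02002: error while writing to audit trail
print(aud.logon("SYS"))         # Connected.
```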

Thursday, November 08, 2012

How I Fixed a Long Running Delete Statement in Oracle

I was asked to look at a long running DELETE statement recently. Once I had solved the problem, I decided to record it for future reference. I set up data to reproduce the problem in an Oracle 11.2.0.2.7 test database and ran it as follows:

SQL> conn andrew/reid
Connected.
SQL> select to_char(sysdate,'DD-MON-YY HH24:MI:SS')
  2  start_time from dual
  3  /

START_TIME
------------------
25-OCT-12 18:38:00

SQL> alter session set sql_trace = true
  2  /

Session altered.

SQL> delete t1
  2  /

1000000 rows deleted.

SQL> alter session set sql_trace = false
  2  /

Session altered.

SQL> select to_char(sysdate,'DD-MON-YY HH24:MI:SS')
  2  finish_time from dual
  3  /

FINISH_TIME
------------------
26-OCT-12 07:37:07

SQL>
 
Deleting 1 million rows would not normally take almost 13 hours, so there was something else going on. I ran the trace file through tkprof with sys=no and saw the following in the output file:
 
********************************************************************************
 
SQL ID: 9mk1dmrqf9dv8 Plan Hash: 784337829
 
delete t1
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          3          0           0
Execute      1    604.16     795.65       3645       1624    6062716     1000000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    604.17     795.66       3645       1627    6062716     1000000
 
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 126  (ANDREW)
Number of plan statistics captured: 1
 
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  DELETE  T1 (cr=1561304066 pr=3714 pw=0 time=3797440855 us)
   1000000    1000000    1000000   TABLE ACCESS FULL T1 (cr=1562 pr=1467 pw=0 time=18489606 us cost=461 size=13097435 card=1007495)
 
Rows     Execution Plan
-------  ---------------------------------------------------
      0  DELETE STATEMENT   MODE: ALL_ROWS
      0   DELETE OF 'T1'
1000000    TABLE ACCESS   MODE: ANALYZED (FULL) OF 'T1' (TABLE)
 
********************************************************************************
 
According to this, the deletion had used only around 10 minutes of CPU time, which did not explain the 13 hour run time. At the end of the file I saw this, which gave me a clue:
 
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
 
call      count       cpu    elapsed       disk      query    current        rows
-------  ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       120      0.00       0.01          0          0          6           0
Execute 1000129  42414.03   45950.87         60 1561302066    1038509     1000000
Fetch       270      0.01       0.58          9        478          0         147
-------  ------  -------- ---------- ---------- ---------- ----------  ----------
total   1000519  42414.04   45951.47         69 1561302544    1038515     1000147
 
Recursive statements are commands run automatically by Oracle, so I ran tkprof again, this time without sys=no. An extra entry appeared in the output file and it accounted for the excessive run time:
 
********************************************************************************
 
SQL ID: dm6vj6u32r27r Plan Hash: 1813935393
 
delete from "ANDREW"."T2"
where
"COL1" = :1
 
call      count       cpu    elapsed       disk      query    current        rows
-------  ------  -------- ---------- ---------- ---------- ----------  ----------
Parse         1      0.00       0.00          0          0          0           0
Execute 1000000  42413.99   45950.84         60 1561302066    1038509     1000000
Fetch         0      0.00       0.00          0          0          0           0
-------  ------  -------- ---------- ---------- ---------- ----------  ----------
total   1000001  42413.99   45950.84         60 1561302066    1038509     1000000
 
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1
 
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  DELETE  T2 (cr=1562 pr=1 pw=0 time=34707 us)
         0          0          0   TABLE ACCESS FULL T2 (cr=1562 pr=1 pw=0 time=34700 us)
 
********************************************************************************
 
I had not run this statement, so something was causing Oracle to execute it automatically one million times. I thought it might be some sort of referential integrity checking so I looked for indexes on table T1:
 
SQL> l
  1  select index_name, column_name, column_position
  2  from dba_ind_columns
  3  where table_owner = 'ANDREW'
  4* and table_name = 'T1'
SQL> /
 
INDEX_NAME      COLUMN_NAME     COLUMN_POSITION
--------------- --------------- ---------------
PK1             COL1                          1
 
SQL>
 
Then I checked to see if any tables were referring to index PK1:
 
SQL> l
  1  select owner, table_name
  2  from dba_constraints
  3* where r_constraint_name = 'PK1'
SQL> /
 
OWNER           TABLE_NAME
--------------- ---------------
ANDREW          T2
 
SQL>
 
... and looked up the DDL used to create this table:
 
SQL> l
  1* select dbms_metadata.get_ddl('TABLE','T2','ANDREW') from dual
SQL> /
 
  CREATE TABLE "ANDREW"."T2"
   (    "COL1" NUMBER,
         CONSTRAINT "CHECK_NUMBER1" FOREIGN KEY ("COL1")
          REFERENCES "ANDREW"."T1" ("COL1") ON DELETE CASCADE ENABLE
   ) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS" ;
 
SQL>
 
This showed referential integrity checking between tables T1 and T2. Each time a row was deleted from table T1, the corresponding row(s) had to be removed from table T2. This was the reason for the 1 million executions of delete from "ANDREW"."T2" where "COL1" = :1 in the tkprof output above.
 
I looked for indexes on table T2 but there were none:
 
SQL> l
  1  select index_name
  2  from dba_indexes
  3  where owner = 'ANDREW'
  4* and table_name = 'T2'
SQL> /
 
no rows selected
 
SQL>
 
This explained the full table scans when rows were deleted from table T2. I created an index on table T2 and reran the test. This time the deletion took a little over 3 minutes:
 
SQL> create index t2_index on t2(col1)
  2  /
 
Index created.
 
SQL> select to_char(sysdate,'DD-MON-YY HH24:MI:SS')
  2  start_time from dual
  3  /
 
START_TIME
------------------
29-OCT-12 12:14:26
 
SQL> delete t1
  2  /
 
1000000 rows deleted.
 
SQL> select to_char(sysdate,'DD-MON-YY HH24:MI:SS')
  2  finish_time from dual
  3  /
 
FINISH_TIME
------------------
29-OCT-12 12:17:45
 
SQL>
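The root cause generalises: without an index on the child table's foreign key column, every cascaded delete has to scan the whole child table, once per parent row. This toy Python sketch shows the two access patterns (it is not Oracle code, and the row counts are made up for illustration):

```python
from collections import defaultdict

# 10,000 child rows; COL1 values 0-999, so each parent key has 10 children
child_rows = [(i % 1000, f"row {i}") for i in range(10_000)]

def children_full_scan(col1):
    """No index on T2.COL1: examine every child row (what Oracle did
    one million times)."""
    return [r for r in child_rows if r[0] == col1]

# Build the equivalent of "create index t2_index on t2(col1)"
t2_index = defaultdict(list)
for r in child_rows:
    t2_index[r[0]].append(r)

def children_indexed(col1):
    """With the index: go straight to the matching child rows."""
    return t2_index[col1]

print(len(children_full_scan(42)))  # 10, after touching all 10,000 rows
print(len(children_indexed(42)))    # 10, after one index lookup
```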