Saturday, October 31, 2009

Oracle rename data file

Oracle is the world's most flexible database, and we have several methods for renaming tablespaces and the underlying data files.

Tablespace data file rename


We can use the ALTER TABLESPACE ... RENAME DATAFILE command, but the tablespace must be offline, and the data file must be renamed at the operating-system level while the tablespace is offline:
sqlplus:  ALTER TABLESPACE cust_ts OFFLINE;

linux> mv /u01/app/oracle/mysid/oldname.dbf /u01/app/oracle/mysid/newname.dbf

sqlplus:  ALTER TABLESPACE cust_ts
          RENAME DATAFILE
          '/u01/app/oracle/mysid/oldname.dbf'
          TO
          '/u01/app/oracle/mysid/newname.dbf';

sqlplus:  ALTER TABLESPACE cust_ts ONLINE;

Database data file rename


We can also use the ALTER DATABASE RENAME FILE command, but the data file must be renamed in the OS (using the linux mv command) while the database is down, and the rename must be issued while the database is un-opened (in the mount stage):
sqlplus:  SHUTDOWN IMMEDIATE

linux> mv /u01/app/oracle/mysid/oldname.dbf /u01/app/oracle/mysid/newname.dbf

sqlplus:  STARTUP MOUNT

sqlplus:  ALTER DATABASE
          RENAME FILE
          '/u01/app/oracle/mysid/oldname.dbf'
          TO
          '/u01/app/oracle/mysid/newname.dbf';

sqlplus:  ALTER DATABASE OPEN;

RMAN data file rename


In RMAN, you can rename a data file using the following commands:

  • SET NEWNAME – This command renames the data files to be restored to a new location. It is the RMAN equivalent of the db_file_name_convert initialization parameter. The combination of SET NEWNAME and the SWITCH command is equivalent to the ALTER DATABASE RENAME FILE statement.

  • CONFIGURE AUXNAME – Equivalent to SET NEWNAME, except that CONFIGURE AUXNAME is persistent, whereas SET NEWNAME must be issued every time the DBA wants to rename a data file. You must be connected to the recovery catalog in order to use CONFIGURE AUXNAME.
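As a sketch of the SET NEWNAME plus SWITCH combination (the target location under /u02 is hypothetical; the source path reuses the example above, and the restore must run while the affected data file is offline or the database is mounted):

RMAN> RUN {
        SET NEWNAME FOR DATAFILE '/u01/app/oracle/mysid/oldname.dbf'
          TO '/u02/app/oracle/mysid/newname.dbf';
        RESTORE DATAFILE '/u01/app/oracle/mysid/oldname.dbf';
        SWITCH DATAFILE ALL;
        RECOVER DATAFILE '/u02/app/oracle/mysid/newname.dbf';
      }

SWITCH DATAFILE ALL is what updates the control file to point at the restored copies, playing the role of ALTER DATABASE RENAME FILE.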

Wednesday, October 28, 2009

Tablespace Information

Here are some scripts related to Tablespace Information.

Information


TABLESPACE INFORMATION NOTES:

  • Tablespace Name - Name of the tablespace

  • Initial Extent - Default initial extent size

  • Next Extent - Default incremental extent size

  • Min Extents - Default minimum number of extents

  • Max Extents - Default maximum number of extents

  • PCT Increase - Default percent increase for extent size

  • Status - Tablespace status: ONLINE, OFFLINE, or INVALID (tablespace has been dropped)

  • Contents - Type of tablespace. This column will have 'TEMPORARY' (v7.3+) for dedicated temporary tablespaces, and 'PERMANENT' for tablespaces that can store both temporary sort segments and permanent objects.

    select TABLESPACE_NAME,
           INITIAL_EXTENT,
           NEXT_EXTENT,
           MIN_EXTENTS,
           MAX_EXTENTS,
           PCT_INCREASE,
           STATUS,
           CONTENTS
    from   dba_tablespaces
    order by TABLESPACE_NAME;


    Coalesced Exts


    COALESCED EXTENTS NOTES:

  • Tablespace Name - Name of tablespace

  • Total Extents - Total number of free extents in tablespace

  • Extents Coalesced - Total number of coalesced free extents in tablespace

  • % Extents Coalesced - Percentage of coalesced free extents in tablespace

  • Total Bytes - Total number of free bytes in tablespace

  • Bytes Coalesced - Total number of coalesced free bytes in tablespace

  • Total Blocks - Total number of free Oracle blocks in tablespace

  • Blocks Coalesced - Total number of coalesced free Oracle blocks in tablespace

  • % Blocks Coalesced - Percentage of coalesced free Oracle blocks in tablespace

    select TABLESPACE_NAME,
           TOTAL_EXTENTS,
           EXTENTS_COALESCED,
           PERCENT_EXTENTS_COALESCED,
           TOTAL_BYTES,
           BYTES_COALESCED,
           TOTAL_BLOCKS,
           BLOCKS_COALESCED,
           PERCENT_BLOCKS_COALESCED
    from   dba_free_space_coalesced
    order by TABLESPACE_NAME;


    Usage


    TABLESPACE USAGE NOTES:

    1. Tablespace Name - Name of the tablespace

    2. Bytes Used - Total size of the tablespace's data files, in bytes

    3. Bytes Free - Size of free space in bytes

    4. Largest - Largest free space in bytes

    5. Percent Used - Percentage of the tablespace that is in use; investigate if it exceeds 85%



    select a.TABLESPACE_NAME,
           a.BYTES bytes_used,
           b.BYTES bytes_free,
           b.largest,
           round(((a.BYTES-b.BYTES)/a.BYTES)*100,2) percent_used
    from   ( select TABLESPACE_NAME,
                    sum(BYTES) BYTES
             from   dba_data_files
             group by TABLESPACE_NAME ) a,
           ( select TABLESPACE_NAME,
                    sum(BYTES) BYTES,
                    max(BYTES) largest
             from   dba_free_space
             group by TABLESPACE_NAME ) b
    where  a.TABLESPACE_NAME = b.TABLESPACE_NAME
    order by ((a.BYTES-b.BYTES)/a.BYTES) desc;


    Users Default (SYSTEM)


    SYSTEM TABLESPACE USAGE NOTES:

  • Username - Name of the user

  • Created - User creation date

  • Profile - Name of resource profile assigned to the user

  • Default Tablespace - Default tablespace for data objects

  • Temporary Tablespace - Default tablespace for temporary objects

  • Only SYS, SYSTEM and possibly DBSNMP should have their default tablespace set to SYSTEM.

    select USERNAME,
           CREATED,
           PROFILE,
           DEFAULT_TABLESPACE,
           TEMPORARY_TABLESPACE
    from   dba_users
    order by USERNAME;


    Objects in SYSTEM TS


    OBJECTS IN SYSTEM TABLESPACE NOTES:

  • Owner - Owner of the object

  • Object Name - Name of object

  • Object Type - Type of object

  • Tablespace - Tablespace name

  • Size - Size (bytes) of object

  • Any user other than SYS and SYSTEM should have their objects moved out of the SYSTEM tablespace

    select OWNER,
           SEGMENT_NAME,
           SEGMENT_TYPE,
           TABLESPACE_NAME,
           BYTES
    from   dba_segments
    where  TABLESPACE_NAME = 'SYSTEM'
    and    OWNER not in ('SYS','SYSTEM')
    order by OWNER, SEGMENT_NAME;


    Freespace/Largest Ext


    FREE, LARGEST, & INITIAL NOTES:

  • Tablespace - Name of the tablespace

  • Total Free Space - Total amount (bytes) of freespace in the tablespace

  • Largest Free Extent - Largest free extent (bytes) in the tablespace

    select TABLESPACE_NAME,
           sum(BYTES) Total_free_space,
           max(BYTES) largest_free_extent
    from   dba_free_space
    group by TABLESPACE_NAME;



  • Hit/Miss Ratios

    Here are some scripts related to Hit/Miss Ratios.

    Buffer Hit Ratio


    BUFFER HIT RATIO NOTES:

  • Consistent Gets - The number of accesses made to the block buffer to retrieve data in a consistent mode.

  • DB Blk Gets - The number of blocks accessed via single block gets (i.e. not through the consistent get mechanism).

  • Physical Reads - The cumulative number of blocks read from disk.

  • Logical reads are the sum of consistent gets and db block gets.

  • The db block gets statistic value is incremented when a block is read for update and when segment header blocks are accessed.

  • Hit Ratio should be > 80%, else increase DB_BLOCK_BUFFERS in init.ora

    select sum(decode(NAME, 'consistent gets', VALUE, 0)) "Consistent Gets",
           sum(decode(NAME, 'db block gets', VALUE, 0)) "DB Block Gets",
           sum(decode(NAME, 'physical reads', VALUE, 0)) "Physical Reads",
           round((sum(decode(NAME, 'consistent gets', VALUE, 0)) +
                  sum(decode(NAME, 'db block gets', VALUE, 0)) -
                  sum(decode(NAME, 'physical reads', VALUE, 0))) /
                 (sum(decode(NAME, 'consistent gets', VALUE, 0)) +
                  sum(decode(NAME, 'db block gets', VALUE, 0))) * 100, 2) "Hit Ratio"
    from   v$sysstat;


    Data Dict Hit Ratio


    DATA DICTIONARY HIT RATIO NOTES:

  • Gets - Total number of requests for information on the data object.

  • Cache Misses - Number of data requests resulting in cache misses

  • Hit Ratio should be > 90%, else increase SHARED_POOL_SIZE in init.ora

    select sum(GETS) Gets,
           sum(GETMISSES) Cache_Misses,
           round((1 - (sum(GETMISSES) / sum(GETS))) * 100, 2) Hit_Ratio
    from   v$rowcache;


    SQL Cache Hit Ratio


    SQL CACHE HIT RATIO NOTES:

  • Pins - The number of times a pin was requested for objects of this namespace.

  • Reloads - Any pin of an object that is not the first pin performed since the object handle was created, and which requires loading the object from disk.

  • Hit Ratio should be > 85%

    select sum(PINS) Pins,
           sum(RELOADS) Reloads,
           round((sum(PINS) - sum(RELOADS)) / sum(PINS) * 100, 2) Hit_Ratio
    from   v$librarycache;


    Library Cache Miss Ratio


    LIBRARY CACHE MISS RATIO NOTES:

  • Executions - The number of times a pin was requested for objects of this namespace (PINS).

  • Cache Misses - The number of those pins that required reloading the object from disk (RELOADS).

  • Miss Ratio should be < 1%, else increase SHARED_POOL_SIZE in init.ora

    select sum(PINS) Executions,
           sum(RELOADS) Cache_Misses,
           round(sum(RELOADS) / sum(PINS) * 100, 2) Miss_Ratio
    from   v$librarycache;



  • Tuesday, October 27, 2009

    Using Oracle's recycle bin

    One of the many new features that Oracle 10g introduced is the recyclebin. When enabled, this feature works a little bit like the familiar Windows recycle bin or Mac Trash. Dropped tables go "into" the recyclebin, and can be restored from the recyclebin. OraFAQ has already published an article covering the basics; in this article, I'll cover some of the more subtle aspects of the recyclebin.

    THE BASICS


    First, a quick review of the basics. There are two recyclebin views: USER_RECYCLEBIN and DBA_RECYCLEBIN. For convenience, the synonym RECYCLEBIN points to your USER_RECYCLEBIN. The recyclebin is enabled by default in 10g, but you can turn it on or off with the RECYCLEBIN initialization parameter, at the system or session level.

    When the recyclebin is enabled, any tables that you drop do not actually get deleted. Instead, when you drop a table, Oracle just renames the table and all its associated objects (indexes, triggers, LOB segments, etc) to a system-generated name that begins with BIN$.

    For example, consider this simple table:
    SQL> create table tst (col varchar2(10), row_chng_dt date);

    Table created.

    SQL> insert into tst values ('Version1', sysdate);

    1 row created.

    SQL> select * from tst ;

    COL ROW_CHNG
    ---------- --------
    Version1 16:10:03

    If the RECYCLEBIN initialization parameter is set to ON (the default in 10g), then dropping this table will place it in the recyclebin:
    SQL> drop table tst;

    Table dropped.

    SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime
    2 from recyclebin
    3 /

    OBJECT_NAME ORIGINAL_NAME TYPE UND PUR DROPTIME
    ------------------------------ ------------- ----- --- -------------------
    BIN$HGnc55/7rRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:10:12

    All that happened to the table when we dropped it was that it got renamed. The table data is still there and can be queried just like a normal table:
    SQL> alter session set nls_date_format='HH24:MI:SS' ;

    Session altered.

    SQL> select * from "BIN$HGnc55/7rRPgQPeM/qQoRw==$0" ;

    COL ROW_CHNG
    ---------- --------
    Version1 16:10:03

    Since the table data is still there, it's very easy to "undrop" the table. This operation is known as a "flashback drop". The command is FLASHBACK TABLE... TO BEFORE DROP, and it simply renames the BIN$... table to its original name:
    SQL> flashback table tst to before drop;

    Flashback complete.

    SQL> select * from tst ;

    COL ROW_CHNG
    ---------- --------
    Version1 16:10:03

    SQL> select * from recyclebin ;

    no rows selected

    It's important to know that after you've dropped a table, it has only been renamed; the table segments are still sitting there in your tablespace, unchanged, taking up space. This space still counts against your user tablespace quotas, as well as filling up the tablespace. It will not be reclaimed until you get the table out of the recyclebin. You can remove an object from the recyclebin by restoring it, or by purging it from the recyclebin.
    SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime
    2 from recyclebin
    3 /

    OBJECT_NAME ORIGINAL_NAME TYPE UND PUR DROPTIME
    ------------------------------ ------------- ------------------------- --- --- -------------------
    BIN$HGnc55/7rRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:10:12

    SQL> purge table "BIN$HGnc55/7rRPgQPeM/qQoRw==$0" ;

    Table purged.

    SQL> select * from recyclebin ;

    no rows selected

    You have several purge options. You can purge everything from your own recyclebin with PURGE RECYCLEBIN; a user with DBA privileges can purge everything from all recyclebins with PURGE DBA_RECYCLEBIN; and finally, you can purge a specific user's recyclebin objects in a given tablespace with PURGE TABLESPACE tablespace_name USER username.

    Unless you purge them, Oracle will leave objects in the recyclebin until the tablespace runs out of space, or until you hit your user quota on the tablespace. At that point, Oracle purges the objects one at a time, starting with the ones dropped the longest time ago, until there is enough space for the current operation. If the tablespace data files are AUTOEXTEND ON, Oracle will purge recyclebin objects before it autoextends a datafile.
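    For example, the purge variants look like this (the tablespace name USERS and the user SCOTT are illustrative):

    SQL> purge recyclebin;
    SQL> purge dba_recyclebin;
    SQL> purge tablespace users user scott;

    The first clears only your own recyclebin; the second requires DBA privileges; the third removes only SCOTT's dropped objects that reside in the USERS tablespace.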

    DROPPED TABLE VERSIONS


    Just as you can wind up with several versions of a file with the same name in the Windows recycle bin, you can wind up with several versions of a table in the Oracle recyclebin. For example, if we create and drop the TST table twice, we'll have two versions in the recyclebin:
    SQL> create table tst (col varchar2(10), row_chng_dt date);

    Table created.

    SQL> insert into tst values ('Version1', sysdate);

    1 row created.

    SQL> drop table tst;

    Table dropped.

    SQL> create table tst (col varchar2(10), row_chng_dt date);

    Table created.

    SQL> insert into tst values ('Version2', sysdate);

    1 row created.

    SQL> drop table tst;

    Table dropped.

    SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime
    2 from recyclebin;

    OBJECT_NAME ORIGINAL_NAME TYPE UND PUR DROPTIME
    ------------------------------ ------------- ----- --- --- -------------------
    BIN$HGnc55/7rRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:10:12
    BIN$HGnc55/8rRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:19:53

    Query the two dropped tables to verify that they are different:
    SQL> select * from "BIN$HGnc55/7rRPgQPeM/qQoRw==$0";

    COL ROW_CHNG
    ---------- --------
    Version1 16:10:03

    SQL> select * from "BIN$HGnc55/8rRPgQPeM/qQoRw==$0" ;

    COL ROW_CHNG
    ---------- --------
    Version2 16:19:45

    If we issue a FLASHBACK DROP command for TST, which version will Oracle restore?
    SQL> flashback table tst to before drop;

    Flashback complete.

    SQL> select * from tst;

    COL ROW_CHNG
    ---------- --------
    Version2 16:19:45

    Oracle always restores the most recent version of the dropped object. To restore the earlier version of the table, instead of the later one, we can either keep flashing back until we hit the version we want, or we can simply refer to the correct version of the table by using its new BIN$... name. For example, dropping TST once more gives us two versions in the recyclebin again:
    SQL> drop table tst;

    Table dropped.

    SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime
    2 from recyclebin;

    OBJECT_NAME ORIGINAL_NAME TYPE UND PUR DROPTIME
    ------------------------------ ------------- ------ --- --- -------------------
    BIN$HGnc55/7rRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:10:12
    BIN$HGnc55/9rRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:21:00

    To flashback to the first version, refer to the BIN$... name of the first version of TST:
    SQL> flashback table "BIN$HGnc55/7rRPgQPeM/qQoRw==$0" to before drop;

    Flashback complete.

    SQL> select * from tst;

    COL ROW_CHNG
    ---------- --------
    Version1 16:10:03

    The second version is still hanging out in the recyclebin:
    SQL> select object_name, original_name, operation, can_undrop as "UND", can_purge as "PUR", droptime
    2 from recyclebin;

    OBJECT_NAME ORIGINAL_NAME OPERATION UND PUR DROPTIME
    ------------------------------ -------------- --------- --- --- -------------------
    BIN$HGnc55/9rRPgQPeM/qQoRw==$0 TST DROP YES YES 2006-09-01:16:21:00

    DEPENDENT OBJECTS


    In a modern relational database, few tables stand alone. Most will have indexes, constraints, and/or triggers. Dropping a table also drops these dependent objects. When you drop a table with the recyclebin enabled, the table and its dependent objects get renamed, but still have the same structure as before. The triggers and indexes get modified to point to the new BIN$ table name. (Any stored procedures that referenced the original object, though, are invalidated.) For example:
    SQL> truncate table tst;

    Table truncated.

    SQL> insert into tst values ('Version3', sysdate);

    1 row created.

    SQL> create index ind_tst_col on tst(col);

    Index created.

    SQL> select * from tst;

    COL ROW_CHNG
    ---------- --------
    Version3 16:26:10

    SQL> drop table tst ;

    Table dropped.

    SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime
    2 from recyclebin
    3 order by droptime;

    OBJECT_NAME ORIGINAL_NAME TYPE UND PUR DROPTIME
    ------------------------------ -------------- ------ --- --- -------------------
    BIN$HGnc55/9rRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:21:00
    BIN$HGnc55//rRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:27:36
    BIN$HGnc55/+rRPgQPeM/qQoRw==$0 IND_TST_COL INDEX NO YES 2006-09-01:16:27:36

    The RECYCLEBIN views have a few other columns that make the relationship between TST and IND_TST_COL clear:
    SQL> select object_name, original_name, type, can_undrop as "UND", 
    2 can_purge as "PUR", droptime, base_object, purge_object
    3 from recyclebin
    4 order by droptime;

    OBJECT_NAME ORIGINAL_NAME TYPE UND PUR DROPTIME BASE_OBJECT PURGE_OBJECT
    ------------------------------ --------------- ----- --- --- ------------------- ----------- ------------
    BIN$HGnc55/9rRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:21:00 233032 233032
    BIN$HGnc55//rRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:27:36 233031 233031
    BIN$HGnc55/+rRPgQPeM/qQoRw==$0 IND_TST_COL INDEX NO YES 2006-09-01:16:27:36 233031 233434

    The PURGE_OBJECT column is the object number of the object itself; e.g. the object number of IND_TST_COL is 233434. Note the value of the BASE_OBJECT column for IND_TST_COL: 233031, the object number of the associated version of the TST table.

    If we FLASHBACK DROP the TST table, its index will be restored - but Oracle will not rename it to its original name. It will retain its BIN$... name:
    SQL> flashback table tst to before drop;

    Flashback complete.

    SQL> select * from tst ;

    COL ROW_CHNG
    ---------- --------
    Version3 16:26:10

    SQL> select index_name from user_indexes where table_name='TST' ;

    INDEX_NAME
    ------------------------------
    BIN$HGnc55/+rRPgQPeM/qQoRw==$0

    I'm not sure why Oracle bothers storing the index's original name, since it doesn't seem to be used for anything. If we now drop this copy of the TST table, Oracle doesn't "remember" that the original name of the index "BIN$HGnc55/+rRPgQPeM/qQoRw==$0" was IND_TST_COL - the ORIGINAL_NAME column in RECYCLEBIN holds the ugly string "BIN$HGnc55/+rRPgQPeM/qQoRw==$0":
    SQL> drop table tst;

    Table dropped.

    SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR",
    2 droptime, base_object, purge_object
    3 from recyclebin
    4 order by droptime;

    OBJECT_NAME ORIGINAL_NAME TYPE UND PUR DROPTIME BASE_OBJECT PURGE_OBJECT
    ------------------------------ --------------- ----- --- --- ------------------- ----------- ------------
    BIN$HGnc55/9rRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:21:00 233032 233032
    BIN$HGnc56ABrRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:31:43 233031 233031
    BIN$HGnc56AArRPgQPeM/qQoRw==$1 BIN$HGnc55/+rRP INDEX NO YES 2006-09-01:16:31:43 233031 233434
    gQPeM/qQoRw==$0

    Note the values in the CAN_UNDROP and CAN_PURGE columns for the index (displayed as "UND" and "PUR" above). An index cannot be undropped without the table - so CAN_UNDROP is set to NO. It can, however, be purged without purging the table:
    SQL> purge index "BIN$HGnc56AArRPgQPeM/qQoRw==$1" ;

    Index purged.

    SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR",
    2 droptime, base_object, purge_object
    3 from recyclebin
    4 order by droptime;

    OBJECT_NAME ORIGINAL_NAME TYPE UND PUR DROPTIME BASE_OBJECT PURGE_OBJECT
    ------------------------------ -------------- ----- --- --- ------------------- ----------- ------------
    BIN$HGnc55/9rRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:21:00 233032 233032
    BIN$HGnc56ABrRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:31:43 233031 233031

    Now, if we restore the table, it will be restored without the index:
    SQL> flashback table tst to before drop;

    Flashback complete.

    SQL> select * from tst ;

    COL ROW_CHNG
    ---------- --------
    Version3 16:26:10

    SQL> select index_name from user_indexes where table_name='TST' ;

    no rows selected

    If you drop a table with associated LOB segments, they are handled in a similar way, except that they cannot be independently purged: CAN_UNDROP and CAN_PURGE are set to NO, they are purged if you purge the table from the recyclebin, and restored with the table if you restore it.

    LIMITATIONS


    A few types of dependent objects are not handled like the simple index above.

    • Bitmap join indexes are not put in the recyclebin when their base table is dropped, and are not retrieved when the table is restored with FLASHBACK DROP.

    • The same goes for materialized view logs; when you drop a table, all mview logs defined on that table are permanently dropped, not put in the recyclebin.

    • Referential integrity constraints that reference another table are lost when the table is put in the recyclebin and then restored.


    If space limitations force Oracle to start purging objects from the recyclebin, it purges indexes first. If you FLASHBACK DROP a table whose associated indexes have already been purged, it will be restored without the indexes.

    DISABLING THE RECYCLEBIN


    In Windows, you can choose to permanently delete a file instead of sending it to the recycle bin. Similarly, you can choose to drop a table permanently, bypassing the Oracle recyclebin, by using the PURGE clause in your DROP TABLE statement.
    SQL> purge recyclebin;

    Recyclebin purged.

    SQL> select * from recyclebin;

    no rows selected

    SQL> create table my_new_table (dummy varchar2(1));

    Table created.

    SQL> drop table my_new_table purge;

    Table dropped.

    SQL> select * from recyclebin;

    no rows selected

    If you disable the recyclebin at the session level, with ALTER SESSION SET RECYCLEBIN=OFF, it has the same effect as putting PURGE at the end of all your drop statements. Note, however, that you can still use FLASHBACK DROP to restore objects that were put in the recyclebin before you set RECYCLEBIN=OFF. For example:
    SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR",
    2 droptime, base_object, purge_object
    3 from recyclebin
    4 order by droptime;

    OBJECT_NAME ORIGINAL_NAME TYPE UND PUR DROPTIME BASE_OBJECT PURGE_OBJECT
    ------------------------------ ------------- ----- --- --- ------------------- ----------- ------------
    BIN$HGnc56ACrRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:34:12 233031 233031

    SQL> alter session set recyclebin=off ;

    Session altered.

    SQL> create table tst (col varchar2(10), row_chng_dt date);

    Table created.

    SQL> insert into tst values ('Version5', sysdate);

    1 row created.

    SQL> drop table tst ;

    Table dropped.

    SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR",
    2 droptime, base_object, purge_object
    3 from recyclebin
    4 order by droptime;

    OBJECT_NAME ORIGINAL_NAME TYPE UND PUR DROPTIME BASE_OBJECT PURGE_OBJECT
    ------------------------------ -------------- ----- --- --- ------------------- ----------- ------------
    BIN$HGnc56ACrRPgQPeM/qQoRw==$0 TST TABLE YES YES 2006-09-01:16:34:12 233031 233031

    SQL> flashback table tst to before drop;

    Flashback complete.

    SQL> select * from tst ;

    COL ROW_CHNG
    ---------- --------
    Version3 16:26:10

    CONCLUSION


    This article has explored some of the subtler ramifications of the recyclebin. To sum up:

    - The recyclebin may contain several versions of a dropped object. Oracle restores them in LIFO order; you can restore older versions by repeatedly restoring until you get the version you want, or by using the correct version's BIN$... name directly.
    - Oracle drops most dependent objects along with the table, and restores them when the table is restored with FLASHBACK DROP, but does not restore their names. You can purge dependent objects separately to restore the table without them.
    - Even after turning RECYCLEBIN OFF, you can FLASHBACK DROP objects that were already in the recyclebin.

    10.2.0.4 Upgrade with DataGuard

    The upgrade was done from 10.2.0.3 to 10.2.0.4. The Data Guard configuration uses a physical standby. Steps and commands may vary depending on your environment.

    In the steps below, each command is run on either the primary or the standby database, as indicated in the step.



    1. On both the primary and standby hosts, uncompress the downloaded patchset file into a new directory.


    2. On the active primary database instance, identify and record the current log thread and sequence number. Then, archive the current log:


    SQL> SELECT THREAD#, SEQUENCE# FROM V$LOG WHERE STATUS='CURRENT';


    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;


    3. Shut down the existing Oracle Database instance on the primary host with normal or immediate priority. Stop all listeners, agents and other processes running against the ORACLE_HOME.


    % lsnrctl stop


    % emctl stop dbconsole


    SQL> SHUTDOWN IMMEDIATE;


    4. On the active standby instance that is running Redo Apply, query the V$LOG_HISTORY view to verify that each log file archived in Step 2 has been received and applied to the standby database. For example:


    SQL> SELECT MAX(SEQUENCE#) FROM V$LOG_HISTORY;


    5. Once the last log has been applied, stop Redo Apply by cancelling managed recovery on the standby database.



    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

    6. Shut down the standby instance on the standby host. Stop all listeners, agents and other processes running against the ORACLE_HOME.


    % lsnrctl stop


    % emctl stop dbconsole


    SQL> SHUTDOWN IMMEDIATE;


    7. Use 'runInstaller' to install the patchset. Do this on both the primary and standby ORACLE_HOME.


    %./runInstaller


    %./runInstaller


    8. Once the patchset has been installed on all hosts/nodes, start the standby listener on the standby host first.



    % lsnrctl start

    9. Start the standby database in mount mode.

    % sqlplus / as sysdba

    SQL> startup mount


    10. Place the standby database in managed recovery mode.

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE PARALLEL 8 DISCONNECT FROM SESSION;

    11. Start the primary instance on the primary host in upgrade mode.



    % sqlplus / as sysdba

    SQL> STARTUP UPGRADE

    12. Ensure that remote archiving to the standby database is functioning correctly by switching log files on the primary and verifying that V$ARCHIVE_DEST.STATUS is VALID:


    SQL> select dest_id, status from v$archive_dest;


    SQL> alter system archive log current;


    13. Check that the archived log has been applied on the standby database:


    SQL> SELECT MAX(SEQUENCE#) FROM V$LOG_HISTORY;


    14. Start the Database Upgrade Assistant and upgrade the primary database.


    % cd /oracle/app/oracle/product/10.2.0/bin


    % ./dbua



    There should be a line in the /etc/oratab file for the database that will be upgraded; otherwise the database will not be visible when you open the Database Upgrade Assistant.



    DBUA may give a warning about invalid objects in the database; you should note these invalid objects before the upgrade operation. The script below lists the invalid objects in the database.


    set pages 999
    break on c1 skip 2
    col c1 heading 'owner' format a15
    col c2 heading 'name'  format a40
    col c3 heading 'type'  format a10
    ttitle 'Invalid|Objects'

    select owner       c1,
           object_type c3,
           object_name c2
    from   dba_objects
    where  status != 'VALID'
    order by owner, object_type;


    15. Once DBUA completes, make note of the current log sequence and archive the current log:


    SQL> SELECT THREAD#, SEQUENCE# FROM V$LOG WHERE STATUS='CURRENT';



    SQL> alter system archive log current;

    16. Restart the primary database:

    SQL> SHUTDOWN

    SQL> STARTUP

    17. Once all actions have been completed, verify that the standby database has recovered to the last archive log produced by the primary (step 15).



    On the primary:

    SQL> select max(sequence#) from v$archived_log;

    On the standby:

    SQL> select max(sequence#) from v$log_history;

    18. Run the script above again to get the list of invalid objects after the database upgrade.



    19. Check the success of the upgrade with the following query:


    SQL> select comp_name, status, version from dba_registry;




    20. Upgrade the RMAN catalog


    $ rman catalog username/password@alias


    RMAN> UPGRADE CATALOG;




    Note: After executing the query in step 19, I saw that the Oracle Database Packages and Types component had the status INVALID; here is a workaround for this issue:

    % sqlplus / as sysdba

    SQL> drop table plan_table;
    SQL> @?/rdbms/admin/utlxplan
    SQL> @?/rdbms/admin/prvtspao.plb
    SQL> @?/rdbms/admin/utlrp.sql

    http://forums.oracle.com/forums/thread.jspa?threadID=669838

    The Flashback technique in 10g

    Introduction


    In 9i, Oracle introduced the notion of the flashback query: the ability to query a table as it stood at a past point in time. A handy mechanism for recovering from user errors.

    In 10g, this mechanism has been considerably enhanced and supports many operations for backing out errors, at three levels:
    - at the level of a single row
    - at the level of a table
    - at the level of the whole database.
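    As a reminder, the row-level (9i-style) flashback query uses the AS OF clause; the table name and interval here are illustrative:

    SQL> select * from emp
      2  as of timestamp (systimestamp - interval '15' minute);

    This returns the rows of EMP as they stood fifteen minutes ago, reconstructed from undo.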

    How it works


    The principle differs according to the flashback operation to be performed.
    - If the operation is at the row or table level (excluding DROP), the undo information (UNDO) is used. It is therefore essential to set the UNDO_RETENTION parameter and the size of the undo tablespace properly in order to go far enough back in time. If the term "undo" means nothing to you, a little reading is in order: undo in 9i and 10g.
    - If the operation is a flashback of a table before a DROP, it relies on the fact that a dropped table is not really deleted: it uses the Oracle "recyclebin".
    - If the operation is a flashback of the whole database, it uses a new kind of log called flashback logs. These logs are stored in the Flash Recovery Area (new in 10g) and have the extension FLB. They are generated periodically and contain the list of blocks modified during that period. They are generally smaller than archived logs. That said, to perform a flashback at the database level, the database must be in ARCHIVELOG mode: the flashback logs contain only the modified blocks, and during a FLASHBACK DATABASE operation these blocks are put back in place as of the requested time. There may, however, be a gap between the requested time and the time at which a block was recorded. To bridge this gap, and thus to keep the database consistent, the archived logs must be replayed.

    Configuration


    Row and table level (excluding DROP)


    We have just seen that for a row-level or table-level flashback (excluding DROP), everything happens in the undo tablespace. The UNDO_RETENTION parameter and the size of the undo tablespace must therefore be configured accordingly.

    The further back in time you want to go with such an operation, the higher UNDO_RETENTION must be, and the larger the tablespace. This is a decision to make when designing the database.
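    As an order-of-magnitude sketch (the values are hypothetical; the datafile name is the one used later in this post), raising the retention window to one hour and growing the undo datafile might look like this:

```sql
-- Keep undo available for 3600 seconds (1 hour) -- hypothetical value
ALTER SYSTEM SET UNDO_RETENTION = 3600;

-- Grow the undo tablespace to match -- hypothetical size
ALTER DATABASE DATAFILE '/u02/orcl/undotbs01.dbf' RESIZE 1G;
```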

    Table level after deletion (DROP)


    To flash back tables after they have been dropped, Oracle's "recycle bin" must be enabled via the RECYCLEBIN parameter. This is a dynamic parameter that can be changed at the session or system level, and it is enabled by default. If the parameter is off, no flashback-before-drop will be possible for dropped tables.

    Example:

    ALTER SYSTEM SET RECYCLEBIN = ON SCOPE=SPFILE;
    ALTER SESSION SET RECYCLEBIN = OFF;

    Database level


    To be able to perform a full FLASHBACK of the database, you must:
    - configure the flash zone, or Flash Recovery Area
    - configure the retention time for such a flashback operation
    - enable FLASHBACK mode for the database
    - be in ARCHIVELOG mode

    - Configuring the Flash Recovery Area. This recovery area will store the flashback logs. It can also store archived redo logs and RMAN backup sets, so it must be given a location and a size large enough to hold all of these objects.

    The area is configured with two parameters:

    DB_RECOVERY_FILE_DEST: the location of the Flash Recovery Area. It can be a physical path or an ASM disk group. Moreover, in a RAC (Real Application Clusters) environment, all instances must use the same destination.

    DB_RECOVERY_FILE_DEST_SIZE: the size of the Flash Recovery Area.

    For example, to configure a 5 GB Flash Recovery Area in /u02/orcl/flash:

    ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/u02/orcl/flash' SCOPE=SPFILE;
    ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 5G SCOPE=SPFILE;

    Then restart the instance.

    - Configuring the retention time for the flashback log data.

    We saw above that flashback database is made possible by specific logs stored in the Flash Recovery Area. These files hold the information for a certain window of time, defined by the DB_FLASHBACK_RETENTION_TARGET parameter: the time, in minutes, for which these logs are kept.

    If, for example, it is set to 1440 (the default), it will be possible to go back at most one day (24*60 minutes) before the operation.
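    For instance, to keep two days of flashback history instead of the default one day (a sketch; the value is simply 2*24*60):

```sql
-- Retain flashback logs for 2 days = 2880 minutes
ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET = 2880;
```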

    - Enabling FLASHBACK mode

    FLASHBACK mode is enabled while the database is MOUNTED. I therefore advise setting the above parameters in the SPFILE, then restarting the instance in the right mode:

    SHUTDOWN IMMEDIATE;
    ...
    STARTUP MOUNT;
    ALTER DATABASE FLASHBACK ON;
    ALTER DATABASE OPEN;

    The check is made in the V$DATABASE view (hence the benefit of configuring all RAC instances identically: FLASHBACK mode is enabled for the database itself):

    SQL> select name,flashback_on from v$database;


    NAME      FLASHBACK_ON
    --------- ------------------
    ORCL      YES

    Going back in time!


    Row level


    The command takes the following form:

    SELECT column1[,column2,...,columnN]
    FROM table AS OF {TIMESTAMP | SCN} expression;

    But rather than a long speech, here is an example:

    SQL>  connect / as sysdba
    Connected.
    SQL> grant execute on dbms_flashback to hr;


    Grant succeeded.

    SQL> connect hr/hr
    Connected.

    SQL> select * from regions where region_id=4;

    REGION_ID REGION_NAME
    ---------- -------------------------
    4 Middle East and Africa

    SQL> select to_char(SYSDATE,'DD/MM/YYYY HH24:MI:SS') as MAINTENANT,
    2         dbms_flashback.get_system_change_number as SCN_COURANT
    3  from dual;

    MAINTENANT          SCN_COURANT
    ------------------- -----------
    26/11/2005 16:11:39      700821

    SQL> -- Let's go grab a coffee and come back ;)
    SQL> select to_char(SYSDATE,'DD/MM/YYYY HH24:MI:SS') as MAINTENANT,
    2         dbms_flashback.get_system_change_number as SCN_COURANT
    3  from dual;

    MAINTENANT          SCN_COURANT
    ------------------- -----------
    26/11/2005 16:13:59      701157

    SQL> update regions set region_name='Groenland' where region_id=4;

    1 row updated.

    SQL> commit;

    Commit complete.

    SQL> -- Let's go get another coffee
    SQL> -- just to let a bit of time pass ;)
    SQL> select to_char(SYSDATE,'DD/MM/YYYY HH24:MI:SS') as MAINTENANT,
    2         dbms_flashback.get_system_change_number as SCN_COURANT
    3  from dual;

    MAINTENANT          SCN_COURANT
    ------------------- -----------
    26/11/2005 16:18:48      702192

    SQL> -- And now, let's go back in time.
    SQL> -- Using the date ...
    SQL> select *
    2  from regions AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '10' MINUTE
    3  where region_id=4;

    REGION_ID REGION_NAME
    ---------- -------------------------
    4 Middle East and Africa

    SQL> -- ... or using the SCN
    SQL> select *
    2  from regions AS OF SCN 700821
    3  where region_id=4;

    REGION_ID REGION_NAME
    ---------- -------------------------
    4 Middle East and Africa

    SQL> -- To put everything back in order, a simple update will do
    SQL> update regions set region_name=
    2  (select region_name
    3   from regions AS OF SCN 700821
    4   where region_id=4)
    5  where region_id=4;

    1 row updated.

    SQL> commit;

    Commit complete.

    SQL> select * from regions where region_id=4;

    REGION_ID REGION_NAME
    ---------- -------------------------
    4 Middle East and Africa

    Table level


    Deleted data. To recover after deleting everything in a table (its structure remains), run the following:

    FLASHBACK TABLE Table
    AS OF {TIMESTAMP | SCN} expression
    [ENABLE TRIGGERS];

    To perform such an operation you must:
    - have the FLASHBACK privilege on the table
    - have the SELECT, INSERT and ALTER privileges on the table to restore
    - enable row movement on the table (ALTER TABLE table ENABLE ROW MOVEMENT), otherwise ORA-08189 is raised.

    SQL> select count(*) from regions_bis;


    COUNT(*)
    ----------
    4

    SQL> delete from regions_bis;

    4 rows deleted.

    SQL> commit;

    Commit complete.

    SQL> -- Oops ....
    SQL> -- 10 minutes later, I consider a FLASHBACK;
    SQL> ALTER TABLE regions_bis ENABLE ROW MOVEMENT;

    Table altered.

    SQL> FLASHBACK TABLE regions_bis TO TIMESTAMP SYSTIMESTAMP - INTERVAL '10' MINUTE;

    Flashback complete.

    SQL> select count(*) from regions_bis;

    COUNT(*)
    ----------
    4

    Dropped table. We saw above that in 10g Oracle gained a "recycle bin". If it is enabled, this bin lets us recover a table that has been dropped.

    In terms of mechanics, every dropped object is marked as dropped and listed in the DBA_RECYCLEBIN view (at DBA level); the USER level is covered by the USER_RECYCLEBIN view. Once marked as dropped, these objects can be recovered. That said, if the system needs new extents because the tablespace runs out of space, the extents of segments sitting in the recycle bin will be recycled into new extents.

    Here again, an example will be clearer than a long explanation.

    SQL> conn / as sysdba
    Connected.
    SQL> show parameter recyclebin


    NAME                  TYPE        VALUE
    --------------------- ----------- ------
    recyclebin            string      on

    SQL> conn hr/hr
    Connected.
    SQL> desc user_recyclebin
    Name                    Null?    Type
    ----------------------- -------- -------------
    OBJECT_NAME             NOT NULL VARCHAR2(30)
    ORIGINAL_NAME                    VARCHAR2(32)
    OPERATION                        VARCHAR2(9)
    TYPE                             VARCHAR2(25)
    TS_NAME                          VARCHAR2(30)
    CREATETIME                       VARCHAR2(19)
    DROPTIME                         VARCHAR2(19)
    DROPSCN                          NUMBER
    PARTITION_NAME                   VARCHAR2(32)
    CAN_UNDROP                       VARCHAR2(3)
    CAN_PURGE                        VARCHAR2(3)
    RELATED                          NOT NULL NUMBER
    BASE_OBJECT                      NOT NULL NUMBER
    PURGE_OBJECT                     NOT NULL NUMBER
    SPACE                            NUMBER

    SQL> select object_name,original_name,type from user_recyclebin;

    no rows selected

    SQL> drop table regions_bis;

    Table dropped.

    SQL> select object_name,original_name,type from user_recyclebin;

    OBJECT_NAME                    ORIGINAL_NAME   TYPE
    ------------------------------ --------------- -----------------
    BIN$M958iDMLSwiRr1va7ePQ0A==$0 REGIONS_BIS     TABLE

    SQL> show recyclebin;
    ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
    ---------------- ------------------------------ ------------ -------------------
    REGIONS_BIS      BIN$M958iDMLSwiRr1va7ePQ0A==$0 TABLE        2005-11-27:16:11:53

    SQL> desc regions_bis;
    ERROR:
    ORA-04043: object regions_bis does not exist

    SQL> -- now restore the table as it was before the DROP, under its original name
    SQL> FLASHBACK TABLE regions_bis TO BEFORE DROP;

    Flashback complete.
    SQL> desc regions_bis;
    Name                  Null?    Type
    --------------------- -------- -------------
    REGION_ID             NOT NULL NUMBER
    REGION_NAME                    VARCHAR2(25)

    SQL> select object_name,original_name,type from user_recyclebin;

    no rows selected

    SQL> drop table regions_bis;

    Table dropped.

    SQL> -- Restore the table, renaming it in the process
    SQL> FLASHBACK TABLE regions_bis TO BEFORE DROP RENAME TO regions_bis_repetita;

    Flashback complete.

    SQL> desc regions_bis;
    ERROR:
    ORA-04043: object regions_bis does not exist

    SQL> desc regions_bis_repetita;
    Name                  Null?    Type
    --------------------- -------- ---------------------------
    REGION_ID             NOT NULL NUMBER
    REGION_NAME                    VARCHAR2(25)

    SQL> -- Now drop the regions_bis_repetita table for good:
    SQL> drop table regions_bis_repetita;

    Table dropped.

    SQL> show recyclebin;
    ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
    ---------------- ------------------------------ ------------ -------------------
    REGIONS_BIS_REPE BIN$LwfJR8+sTp2bksjHK5b3AQ==$0 TABLE        2005-11-27:16:21:30
    TITA
    SQL> purge recyclebin;

    Recyclebin purged.

    SQL> show recyclebin;
    SQL>

    NB: to purge a single object from the recycle bin, you can use the command:

    PURGE {TABLE|INDEX} nom_de_l_objet;

    Database level


    First of all, we need to know the furthest point in time we can go back to. The V$FLASHBACK_DATABASE_LOG system view gives the answer:

    SQL> select to_char(oldest_flashback_time,'DD/MM/YYYY HH24:MI')
    2  from v$flashback_database_log
    3  /


    TO_CHAR(OLDEST_F
    ----------------
    23/11/2005 22:28

    Then a FLASHBACK of the database can be performed. Flashing back a database implies two things:
    - the database must be in the MOUNTED state and in ARCHIVELOG mode
    - the operation requires resetting the redo logs (the OPEN RESETLOGS option), and therefore a new incarnation of the database.

    To go back one hour:

    SQL> SHUTDOWN IMMEDIATE;
    SQL> STARTUP MOUNT;
    SQL> FLASHBACK DATABASE TO TIMESTAMP SYSDATE-1/24;
    SQL> ALTER DATABASE OPEN RESETLOGS;

    Undo in Oracle (9i and later)

    The principle of undo is fairly simple. While working in a session, we create transactions (a sequence of DML statements treated as indivisible, to keep the data consistent).

    During these transactions, we see our own modified data ... but other sessions do not (the isolation principle). The transaction is applied, and thus becomes visible to other sessions, at COMMIT time.

    Example:


    Session 1:

    SQL> select * from myids;

    ID
    -------
    1
    2
    3
    4

    Session 2:

    SQL> select * from myids;

    ID
    -------
    1
    2
    3
    4

    Session 1:

    SQL> insert into myids values(5);

    1 row created.

    SQL> select * from myids;

    ID
    -------
    1
    2
    3
    4
    5

    Session 2:

    SQL> select * from myids;

    ID
    -------
    1
    2
    3
    4

    Session 1:

    SQL> COMMIT;

    SQL> select * from myids;

    ID
    -------
    1
    2
    3
    4
    5

    Session 2:

    SQL> select * from myids;

    ID
    -------
    1
    2
    3
    4
    5

    This example clearly shows that the transaction is not visible to everyone; and that, right there, is the whole undo mechanism.

    Transactions that are not yet durably recorded in the database are held in dedicated segments called undo segments.

    These segments make the ROLLBACK statement possible, restoring the state from the beginning of the transaction.

    Finally, undo is also very useful after an instance crash. If the instance had crashed right after my INSERT, all uncommitted transactions would have been rolled back when the instance restarted (by SMON). If an administrator had killed my session, the mechanism would have been the same (rollback performed by PMON).

    And in practice?


    Well, in practice there are special segments called rollback segments (RBS).

    These special segments are circular (a segment being made of extents, themselves made of Oracle blocks); all the old values are recorded in them. From there, two scenarios:

    • a COMMIT validates the transaction, in which case:

      • the SCN value is incremented

      • the undo segment is freed for other operations (by incrementing the current SCN)

      • the modified data is durably recorded in the database (in the redo logs).



    • a ROLLBACK invalidates the transaction, in which case:

      • the data held in the RBS is put back into the data segment

      • the undo data is invalidated and the segment can be reused for other rollbacks.






    [Figure: how undo works (JPEG, 19.5 KB)]





    NB: since these segments are circular, the volume of transaction data they can record is limited. A large UPDATE will need a fairly large RBS if you want to be able to roll it back.

    Why is it better since 9i?


    Before 9i, RBSs were generally stored in a tablespace whose storage parameters had to be managed (like any tablespace).

    On top of that, you had to tune the number of RBSs and the storage rules for each of them, and their size had to be studied carefully, notably for batch jobs, which generally need large undo areas... in short, the RBSs required constant care.

    Starting with 9i, Oracle simplified undo management (while keeping the "old-fashioned" RBS mode available). Automatic undo management now boils down to:
    - an undo tablespace, or UNDO TABLESPACE (with the configuration that implies). This tablespace contains RBSs, but they are no longer managed by a human.
    - a retention time for the undo data, UNDO RETENTION.

    Undo management


    Automatic undo management (AUTOMATIC UNDO MANAGEMENT, as opposed to MANUAL, i.e. managing the RBSs yourself) is set up in a few steps.

    The first is to create an undo tablespace. For example, for a locally managed tablespace UNDOTBS01:

    CREATE UNDO TABLESPACE undotbs01 DATAFILE '/u02/orcl/undotbs01.dbf' size 500M REUSE
    AUTOEXTEND OFF
    /

    Next, the database parameters must be changed. First, state that we will work in automatic mode with our UNDOTBS01 tablespace (with an SPFILE, this looks like):

    ALTER SYSTEM SET UNDO_MANAGEMENT=auto SCOPE=SPFILE;
    ALTER SYSTEM SET UNDO_TABLESPACE=undotbs01 SCOPE=SPFILE;

    We must then decide how long to keep our undo data: in short, we tell the system for how many seconds the data must remain available for rollback. This parameter is called UNDO_RETENTION.

    If, for example, it is set to 10 minutes (600 seconds), then all our undo data can be kept for 10 minutes... beyond that, a characteristic error can occur: ORA-01555 (see below).

    ALTER SYSTEM SET UNDO_RETENTION=600 ;

    ORA-01555 ??


    As we saw above, this error is raised when a transaction has been running for longer than the UNDO_RETENTION setting and another session has tried to access the data modified by that transaction.

    Indeed, when data blocks are modified, we saw that the old values are placed in a rollback segment, while the new image of the data goes into the data segment.

    When another session accesses the table, it must see the old data (the before image). The data is therefore assembled from the undo segment and from the table to provide a consistent read. ORA-01555 is raised when the concurrent session (not the one modifying the data) tries to read data from the undo segment and that data is no longer there (the retention time has been exceeded).

    When this error appears, it is advisable to increase UNDO_RETENTION and, accordingly, to keep an eye on how full the undo tablespace is.

    Performance monitoring


    Undo performance statistics are found in the V$UNDOSTAT view.

    Other system views (DBA_ROLLBACK_SEGS, V$TRANSACTION, V$ROLLSTAT, V$ROLLNAME, V$SESSION) can be joined with it to get information about the undo segments.
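    For example, a quick look at recent undo activity, including any ORA-01555 occurrences, could be (a sketch; the column list is a selection of V$UNDOSTAT's columns):

```sql
-- Undo consumption and ORA-01555 counts per 10-minute interval
SELECT to_char(begin_time, 'DD/MM/YYYY HH24:MI') AS interval_start,
       undoblks,      -- undo blocks consumed in the interval
       txncount,      -- transactions executed in the interval
       maxquerylen,   -- length (seconds) of the longest query
       ssolderrcnt    -- number of ORA-01555 ("snapshot too old") errors
FROM   v$undostat
ORDER  BY begin_time DESC;
```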

    Source http://laurent.leturgez.free.fr/

    Measuring redo log switches

    This little script shows whether your redo logs switch too often or not often enough.

    If they switch too often, this can cause I/O problems and redo log contention.

    If they do not switch often enough, that can also be a problem, notably for cold (closed-database) backups, if all the changes between two backups do not fit in the online redo logs.

    Note that this script works on Oracle 8i and 9i. NB: it requires SELECT on v$loghist.
    SELECT  a.sequence# ,
    to_char(a.first_time,'DD-MON-YYYY HH:MI:SS') "date début Log",
    ( b.first_time - a.first_time ) * 3600*24      "Secondes"
    FROM    v$loghist  a,   v$loghist  b
    WHERE   b.sequence#= a.sequence# + 1
    /

    Show the ten largest objects in the database

    col	owner format a15
    col segment_name format a30
    col segment_type format a15
    col mb format 999,999,999
    select owner
    , segment_name
    , segment_type
    , mb
    from (
    select owner
    , segment_name
    , segment_type
    , bytes / 1024 / 1024 "MB"
    from dba_segments
    order by bytes desc
    )
    where rownum < 11
    /

    How large is the database

    col "Database Size" format a20
    col "Free space" format a20
    col "Used space" format a20
    select round(sum(used.bytes) / 1024 / 1024 / 1024 ) || ' GB' "Database Size"
    , round(sum(used.bytes) / 1024 / 1024 / 1024 ) -
    round(free.p / 1024 / 1024 / 1024) || ' GB' "Used space"
    , round(free.p / 1024 / 1024 / 1024) || ' GB' "Free space"
    from (select bytes
    from v$datafile
    union all
    select bytes
    from v$tempfile
    union all
    select bytes
    from v$log) used
    , (select sum(bytes) as p
    from dba_free_space) free
    group by free.p
    /

    Show user info including os pid

    col "SID/SERIAL" format a10
    col username format a15
    col osuser format a15
    col program format a40
    select s.sid || ',' || s.serial# "SID/SERIAL"
    , s.username
    , s.osuser
    , p.spid "OS PID"
    , s.program
    from v$session s
    , v$process p
    Where s.paddr = p.addr
    order by to_number(p.spid)
    /

    Sessions sorted by logon time

    set lines 100 pages 999
    col ID format a15
    col osuser format a15
    col login_time format a14
    select username
    , osuser
    , sid || ',' || serial# "ID"
    , status
    , to_char(logon_time, 'hh24:mi dd/mm/yy') login_time
    , last_call_et
    from v$session
    where username is not null
    order by login_time
    /

    Show all connected users

    set lines 100 pages 999
    col ID format a15
    select username
    , sid || ',' || serial# "ID"
    , status
    , last_call_et "Last Activity"
    from v$session
    where username is not null
    order by status desc
    , last_call_et desc
    /

    Purge old files using “find” command

    I've noticed that one of our interface directories has a lot of old files, some of them were more than a year old. I checked it with our implementers and it turns out that we can delete all files that are older than 60 days.

    I decided to write a (tiny) shell script to purge all files older than 60 days and schedule it with crontab, this way I won't deal with it manually. I wrote a find command to identify and delete those files. I started with the following command:

    find /interfaces/inbound -mtime +60 -type f -maxdepth 1 -exec rm {} \;

    It finds and deletes all files in directory /interface/inbound that are older than 60 days.
    "-maxdepth 1" -> find files in current directory only. Don't look for files in sub directories.

    After packing it in a shell script I got a request to delete "csv" files only. No problem... I added the "-name" to the find command:

    find /interfaces/inbound -name "*.csv" -mtime +60 -type f -maxdepth 1 -exec rm {} \;

    All csv files in /interface/inbound that are older than 60 days will be deleted.

    But then, the request had changed, and I was asked to delete "*.xls" files further to "*.csv" files. At this point things went complicated for me since I'm not a shell script expert...

    I tried several things, like add another "-name" to the find command:

    find /interfaces/inbound -name "*.csv" -name "*.xls" -mtime +60 -type f -maxdepth 1 -exec rm {} \;

    But no file was deleted. A couple of moments later I understood that I was trying to find csv files that are also xls files... (logically incorrect, of course).

    After struggling a little with the find command, I managed to make it work:

    find /interfaces/inbound \( -name "*.csv" -o -name "*.xls" \) -mtime +60 -type f -maxdepth 1 -exec rm {} \;
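    Packaged for cron, the final command can be wrapped in a small POSIX shell function. The function name and the crontab entry below are hypothetical; /interfaces/inbound is the directory from this post:

```shell
#!/bin/sh
# purge_old_files DIR DAYS
# Delete top-level *.csv and *.xls files in DIR that are older than DAYS days.
purge_old_files() {
    find "$1" -maxdepth 1 -type f \
         \( -name "*.csv" -o -name "*.xls" \) \
         -mtime +"$2" -exec rm {} \;
}

# Example invocation (the path used in the post):
# purge_old_files /interfaces/inbound 60
```

    A crontab line such as `0 2 * * * /home/oracle/purge_inbound.sh` (hypothetical script path) would then run the purge nightly at 02:00.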

    Startup time

    select	to_char(startup_time, 'HH24:MI DD-MON-YY') "Startup time"
    from v$instance
    /

    lundi 26 octobre 2009

    Is java installed in the database?

    This will return 9000'ish if it is...

    select 	count(*)
    from all_objects
    where object_type like '%JAVA%'
    and owner = 'SYS'
    /

    dimanche 25 octobre 2009

    List open cursors per user

    set pages 999
    select sess.username
    , sess.sid
    , sess.serial#
    , stat.value cursors
    from v$sesstat stat
    , v$statname sn
    , v$session sess
    where sess.username is not null
    and sess.sid = stat.sid
    and stat.statistic# = sn.statistic#
    and sn.name = 'opened cursors current'
    order by value
    /

    samedi 24 octobre 2009

    Show latent parameter changes

    select name
    , value
    from v$parameter
    where ismodified != 'FALSE'
    /

    jeudi 22 octobre 2009

    Reset/Unset a spfile parameter

    Setting a parameter to =' ' often isn't enough. Do this instead...

    alter system reset <parameter> scope=spfile sid='*'
    /

    The sid='*' bit is always necessary, even in a non-RAC database.

    mercredi 21 octobre 2009

    Show non-default parameters

    select name
    , value
    from v$parameter
    where isdefault = 'FALSE'
    /

    or for name='parameter' style...

    set pages 999 lines 100
    select name || '=' || decode(type, 2, '''') || value
    || decode(type, 2, '''') parameter
    from v$parameter
    where isdefault = 'FALSE'
    and value is not null
    order by name
    /