mardi 19 octobre 2010

How to Use DVSYS.DBMS_MACADM procedures to Prohibit Data Access to Users

A realm has been created on the HR schema objects: how can we prohibit the SYSTEM and SYS accounts from accessing HR objects without using the Database Vault Administrator GUI tool (DVA)?

1. Realm creation and schema object selection:
SQL> execute dvsys.dbms_macadm.CREATE_REALM('Test HR','Protects data from DBA access','Y', 1);

SQL>  execute DVSYS.DBMS_MACADM.ADD_OBJECT_TO_REALM ( 'Test HR', 'HR', '%', '%');

or
begin
dvsys.dbms_macadm.add_object_to_realm ( realm_name => 'Test HR'  , object_owner => 'HR'  ,
object_name => '%'  ,
object_type => '%');
end;
/

2. Tests: connect as the SYSTEM user to check whether SYSTEM still has access to HR data:
SQL> conn system/manager
Connected.

SQL> select * from HR.dept;

DEPTNO DNAME LOC
----------   -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO

The SELECT succeeds: the realm alone did not block the SYSTEM user (see the solution below).

Connect as the SYS user to check whether SYS has access to HR data:
SQL> conn sys/xxx as sysdba
Connected

SQL> select * from HR.dept;

select * from HR.dept
*
ERROR at line 1:
ORA-01031: insufficient privileges

Solution


1. Check whether the SYSTEM user has been granted the SELECT object privilege on the HR.DEPT table.

Yes, it had been granted. Hence the realm does not prohibit the SYSTEM user from selecting data from the HR table: a realm only protects data from being accessed through system privileges, and a direct object privilege bypasses it.
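
You can verify this from the data dictionary; a minimal check (the grantee and owner names come from this example):

SQL> select grantee, owner, table_name, privilege
     from dba_tab_privs
     where grantee = 'SYSTEM' and owner = 'HR';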

2. Revoke the object privilege on HR schema tables from SYSTEM user.
SQL> conn sys/xxx as sysdba
Connected.

SQL> revoke select on hr.dept from system;
Revoke succeeded.

SQL> conn dvowner/xxx
Connected.

SQL> execute dvsys.dbms_macadm.CREATE_REALM('Test HR','Protects data from DBA access','Y', 1);
PL/SQL procedure successfully completed.

SQL> execute DVSYS.DBMS_MACADM.ADD_OBJECT_TO_REALM ('Test HR', 'HR', '%', '%');
PL/SQL procedure successfully completed.

SQL> conn system/xxx
Connected.

SQL> select * from HR.dept;
select * from HR.dept
*
ERROR at line 1:
ORA-01031: insufficient privileges


jeudi 14 octobre 2010

How to Drop and Recreate the DB Control Repository

Information in this article applies to:
Oracle Enterprise Manager 10g Release 1 (10.1)
Oracle Enterprise Manager 10g Release 2 (10.2)

Goal:
Provide detailed instructions on how to remove and recreate the DB Control Repository in a 10g Database

1. Stop the DB Console if not already stopped: $ORACLE_HOME/bin/emctl stop dbconsole

On Unix Systems:
1.  Run 10g DB $OH/sysman/admin/emdrep/bin/RepManager
NOTE: Type RepManager by itself and you will get a list of options available for the tool

2.  After you delete the Repository, you will need to remove the configuration files.  To do this, run emca -x

3.  Finally, to create both the schema objects and configuration files, run emca and answer the prompts

On Windows Systems:
NOTE: The RepManager script in 10.1.0.2.0 does not work correctly on Windows.  For this reason, you will have to manually drop the sysman schema and management objects.  The RepManager script should be fully functional in the 10.1.0.3.0 release.

1.  Log on to SQL*Plus as user SYS or SYSTEM, and drop the sysman account and management objects:

SHUTDOWN IMMEDIATE;

STARTUP RESTRICT;

EXEC sysman.emd_maintenance.remove_em_dbms_jobs;

EXEC sysman.setEMUserContext('', 5);

REVOKE dba FROM sysman;

DECLARE
    CURSOR c1 IS
      SELECT owner, synonym_name name
      FROM   dba_synonyms
      WHERE  table_owner = 'SYSMAN';
BEGIN
    FOR r1 IN c1 LOOP
        IF r1.owner = 'PUBLIC' THEN
           EXECUTE IMMEDIATE 'DROP PUBLIC SYNONYM '||r1.name;
        ELSE
           EXECUTE IMMEDIATE 'DROP SYNONYM '||r1.owner||'.'||r1.name;
        END IF;
    END LOOP;
END;
/

DROP USER mgmt_view CASCADE;

DROP ROLE mgmt_user;

DROP USER sysman CASCADE;

ALTER SYSTEM DISABLE RESTRICTED SESSION;

2.  At a command Prompt, run emca -x sid to remove the existing configuration files and then recreate the schema and configuration files with just emca.  For example:
emca -x <SID>
emca

For additional information on emca commandline options, please see the Oracle Enterprise Manager 10g Advanced Configuration Guide.  The Guide is available from OTN.  http://download-west.oracle.com/docs/pdf/B12013_01.pdf

A. Drop the repository manually using the following procedure.

Stop the DB Console if not already stopped: $ORACLE_HOME/bin/emctl stop dbconsole
Log on to SQL*Plus as the SYS user and run the following commands:

a. drop user sysman cascade;
b. drop role MGMT_USER;
c. drop user MGMT_VIEW cascade;
d. drop public synonym MGMT_TARGET_BLACKOUTS;
e. drop public synonym SETEMVIEWUSERCONTEXT;
(Note: if any of the above statements fails saying the object doesn't exist, just skip that statement.)

B. Drop the DB Control configuration files.

Remove the following directories:
$ rm -rf $ORACLE_HOME/<HOSTNAME_SID>
$ rm -rf $ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_<HOSTNAME_SID>

On Windows systems just delete the folders above from the Explorer.

Remove the entries for the old dbconsole information from the $ORACLE_HOME/install/portlist.ini file

On Windows, use delsrv.exe (available for free download from Microsoft) or the ‘sc’ command to remove the ORACLEDBCONSOLE<SID> service if it still remains.

To summarize, the complete procedure is:

1. Remove the current Database Control configuration files and repository

1.1 set ORACLE_HOME and ORACLE_SID environment variable
1.2 <ORACLE_HOME>/bin/emca -x <SID>
1.3 <ORACLE_HOME>/sysman/admin/emdrep/bin/RepManager <hostname> <actual_listener_port> <sid> -action drop
1.4 Manually delete any <hostname>_<sid> directories in <ORACLE_HOME> and <ORACLE_HOME>/oc4j/j2ee
1.5 On Windows, use delsrv.exe (available for free download from Microsoft) or the ‘sc’ command to remove the ORACLEDBCONSOLE<SID> service if it still remains.

2. Recreate the Database Control configuration files and repository

2.1 <ORACLE_HOME>/bin/emca
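
On 10g Release 2 the emca interface changed to require explicit operands; there the drop and recreate steps are typically done as follows (a sketch of the 10.2 syntax; emca will prompt for SID, listener port, and passwords):

<ORACLE_HOME>/bin/emca -deconfig dbcontrol db -repos drop
<ORACLE_HOME>/bin/emca -config dbcontrol db -repos recreate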

vendredi 8 octobre 2010

Exploring the Oracle Database Architecture

Explain the Memory Structures


  • System Global Area (SGA) - The SGA is a group of shared memory structures, known as SGA components, that contain data and control information for one Oracle Database instance. The SGA is shared by all server and background processes. Examples of data stored in the SGA include cached data blocks and shared SQL areas.

  • Program Global Area (PGA) - A PGA is a nonshared memory region that contains data and control information exclusively for use by an Oracle process. The PGA is created by Oracle Database when an Oracle process is started.

    One PGA exists for each server process and background process. The collection of individual PGAs is the total instance PGA, or instance PGA. Database initialization parameters set the size of the instance PGA, not individual PGAs.

  • User Global Area (UGA) - The UGA is memory associated with a user session.

  • Software Code Areas - Software code areas are portions of memory used to store code that is being run or can be run. Oracle Database code is stored in a software area that is typically at a different location from user programs: a more exclusive or protected location.
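
You can inspect these areas on a running instance with the dynamic performance views; a minimal sketch (the v$pgastat statistic names assume a 10g-era instance):

SQL> select * from v$sga;

SQL> select name, value, unit
     from v$pgastat
     where name in ('total PGA allocated', 'total PGA inuse');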



Describe the Process Structures


  • Client processes run the application or Oracle tool code

  • Oracle Processes run the Oracle Database code. Oracle processes include the following subtypes:

    • Background Processes start with the database instance and perform maintenance tasks such as performing instance recovery, cleaning up processes, writing redo buffers to disk, and so on.

    • Server Processes perform work based on a client request.

      For example, these processes parse SQL queries, place them in the shared pool, create and execute a query plan for each query, and read buffers from the database buffer cache or from disk.

    • Slave Processes perform additional tasks for a background or server process.
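
To see which background processes are currently running on your instance, one common dictionary query is (a minimal sketch):

SQL> select name, description
     from v$bgprocess
     where paddr <> '00';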





Overview of Storage Structures


  • Physical

    • Data Files and Temp Files - A data file is a physical file on disk that was created by Oracle Database and contains data structures such as tables and indexes. A temp file is a data file that belongs to a temporary tablespace. The data is written to these files in an Oracle proprietary format that cannot be read by other programs.

    • Control Files - a control file is a root file that tracks the physical components of the database.

    • Online Redo Log - the online redo log is a set of files containing records of changes made to data.



  • Logical

    • Data Blocks - Oracle Database manages the logical storage space in the data files of a database in units called data blocks, also called logical blocks, Oracle blocks, or pages. An Oracle data block is the minimum unit of database I/O.

    • Extents - an extent is a logical unit of database storage space allocation made up of contiguous data blocks. Data blocks in an extent are logically contiguous but can be physically spread out on disk because of RAID striping and file system implementations.

    • Segments - a segment is a set of extents that contains all the data for a logical storage structure within a tablespace. For example, Oracle Database allocates one or more extents to form the data segment for a table, and one or more extents to form the index segment for an index on that table.

    • Tablespaces - a tablespace is a logical storage container for segments. Segments are database objects, such as tables and indexes, that consume storage space. At the physical level, a tablespace stores data in one or more data files or temp files. A database must have the SYSTEM and SYSAUX tablespaces. The query after this list illustrates the hierarchy.
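
To illustrate the hierarchy, this query lists each segment of a schema with its extent and block counts (the HR schema is only an example):

SQL> select segment_name, segment_type, tablespace_name, extents, blocks
     from dba_segments
     where owner = 'HR'
     order by segment_name;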




mardi 13 juillet 2010

Error 1044 in MySQL: Access denied when using LOCK TABLES



I got an error while using mysqldump

mysqldump: Got error: 1044: Access denied for user x@y to database z when using LOCK TABLES

To solve this problem, either ask your administrator to grant you the LOCK TABLES privilege, or use the following command instead.

mysqldump -u username -p database --single-transaction >dump.sql
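
For reference, if you do have administrative access, the grant would look like this (user x, host y, and database z come from the error message above):

GRANT LOCK TABLES ON z.* TO 'x'@'y';
FLUSH PRIVILEGES;

The --single-transaction option works because it dumps InnoDB tables from a consistent snapshot instead of locking them.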

vendredi 11 juin 2010

ORA-08102: index key not found, obj# ... Solution to resolve

While updating a table, I got the following error:


ORA-08102: index key not found, obj# 116528, file 96, block 8795 (2)

I had resolved this problem before by rebuilding the indexes (ORA-08102 on index objects), but today I could not use that simple solution.


What is this error about?


Error: ORA-08102

Oracle 10g message text: index key not found, obj# %s, file %s, block %s (%s)
Oracle 9.2 or earlier message text: index key not found, obj# %s, dba %s (%s)

ORA-08102 is a mismatch between the key(s) stored in the index and the values stored in the table.

What typically happens is the index is built and at some future time, some type of corruption occurs, either in the table or index, to cause the mismatch.

Now that we know what the error means, how do we resolve it?

-> Check object, that have the problem, if it's index... try to rebuild:

SQL>select object_name, object_type
from dba_objects
where object_id = [obj# in ORA-08102]

Example:

SQL> select object_name, object_type
from dba_objects
where object_id = 116528;

OBJECT_NAME OBJECT_TYPE
--------------- -------------------
XXX_IDX07 INDEX PARTITION

And then rebuild it.

SQL> alter index [index_name] rebuild [online];

Or, if it is a partitioned index, find the affected partition first:

SQL> select partition_name
     from dba_segments
     where tablespace_name = (select tablespace_name from dba_data_files
                              where file_id = [file_id from ORA-08102])
     and segment_name = [object_name from dba_objects];

SQL> select partition_name from dba_segments where TABLESPACE_NAME = (select tablespace_name from dba_data_files where file_id=96) and SEGMENT_NAME='XXX_IDX07';

PARTITION_NAME
------------------------------
XXX_PART1

SQL> alter index [index_name] rebuild partition [partition_name] [online];

-> If ORA-08102 occurs on a table object, resolve it with ANALYZE TABLE ... VALIDATE STRUCTURE (a sketch follows) or use dbv to check for corruption in the table. Don't forget to check for errors at the OS level and in the trace files and alert log.
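
A minimal validation sketch (the table name is an example; CASCADE also cross-checks the table against its indexes):

SQL> analyze table scott.emp validate structure cascade;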

-> If ORA-08102 occurs on an index object and the error persists after the rebuild, drop that index and recreate it.

jeudi 3 juin 2010

Freeing up db_recovery_file_dest reclaimable space.

I temporarily reduced db_recovery_file_dest_size to force the database to free reclaimable space in the recovery file destination.

This reduced disk usage of ASM_ORADATA217 by over 320GB.


SYS>show parameter recovery

NAME TYPE VALUE
----------------------------------- ------------------------- ---------------------
db_recovery_file_dest string +ASM_ORADATA217
db_recovery_file_dest_size big integer 375G

SYS>select * from V$FLASH_RECOVERY_AREA_USAGE;

FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
------------ ------------------ ------------------------- ---------------
CONTROLFILE 0 0 0
ONLINELOG 0 0 0
ARCHIVELOG 0 0 0
BACKUPPIECE 0 0 0
IMAGECOPY 0 0 0
FLASHBACKLOG 88.7 83.73 1991


SYS>alter system set db_recovery_file_dest_size=20G;

System altered.

SYS>alter system set db_recovery_file_dest_size=375G;

System altered.

SYS>alter system switch logfile;

System altered.

SYS>select * from V$FLASH_RECOVERY_AREA_USAGE;

FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
------------ ------------------ ------------------------- ---------------
CONTROLFILE 0 0 0
ONLINELOG 0 0 0
ARCHIVELOG 0 0 0
BACKUPPIECE 0 0 0
IMAGECOPY 0 0 0
FLASHBACKLOG 5.32 .35 135

mardi 1 juin 2010

Cannot export empty tables in 11.2

In 11g no segment is allocated to a table until you insert the first row, so the EXP utility fails to export empty tables.

[oracle@dev-oranode-221 ~]$ sqlplus a/a

SQL*Plus: Release 11.2.0.1.0 Production on Mon Nov 30 14:38:06 2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining, Oracle Database Vault
and Real Application Testing options
SQL> create table imp_test(id integer primary key);

Table created.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining, Oracle Database Vault
and Real Application Testing options
[oracle@dev-oranode-221 ~]$ exp a/a

Export: Release 11.2.0.1.0 - Production on Mon Nov 30 14:37:17 2009

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining, Oracle Database Vault
and Real Application Testing options
Enter array fetch buffer size: 4096 >

Export file: expdat.dmp >

(1)E(ntire database), (2)U(sers), or (3)T(ables): (2)U > 3

Export table data (yes/no): yes >

Compress extents (yes/no): yes >

Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8MSWIN1252 character set (possible charset conversion)

About to export specified tables via Conventional Path ...
Table(T) or Partition(T:P) to be exported: (RETURN to quit) > imp_test

EXP-00011: A.IMP_TEST does not exist
Table(T) or Partition(T:P) to be exported: (RETURN to quit) >

Export terminated successfully with warnings.
[oracle@dev-oranode-221 ~]$ sqlplus a/a

SQL*Plus: Release 11.2.0.1.0 Production on Mon Nov 30 14:38:06 2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining, Oracle Database Vault
and Real Application Testing options

SQL> insert into imp_test values(1);

1 row created.

SQL> commit;

Commit complete.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining, Oracle Database Vault
and Real Application Testing options
[oracle@dev-oranode-221 ~]$ exp a/a

Export: Release 11.2.0.1.0 - Production on Mon Nov 30 14:38:32 2009

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining, Oracle Database Vault
and Real Application Testing options
Enter array fetch buffer size: 4096 >

Export file: expdat.dmp >

(1)E(ntire database), (2)U(sers), or (3)T(ables): (2)U > 3

Export table data (yes/no): yes >

Compress extents (yes/no): yes >

Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8MSWIN1252 character set (possible charset conversion)

About to export specified tables via Conventional Path ...
Table(T) or Partition(T:P) to be exported: (RETURN to quit) > imp_test

. . exporting table IMP_TEST 1 rows exported
Table(T) or Partition(T:P) to be exported: (RETURN to quit) >

In Oracle 11g, with deferred segment creation, Oracle does not allocate a segment for a newly created table until the first row is inserted. The original exp utility only exports objects that actually have segments allocated, which is why the export of the empty table failed. You can export empty tables by changing the parameter deferred_segment_creation from TRUE to FALSE (this affects tables created afterwards), or by forcing segment allocation on existing empty tables, as sketched below.
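
A minimal sketch of both workarounds (using the imp_test table from the example above):

SQL> alter system set deferred_segment_creation=false;   -- affects tables created from now on
SQL> alter table imp_test allocate extent;               -- forces a segment for this empty table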

vendredi 28 mai 2010

Automatic Workload Repository (AWR) in Oracle Database 10g

Oracle has provided many performance gathering and reporting tools over the years. Originally the UTLBSTAT/UTLESTAT scripts were used to monitor performance metrics. Oracle8i introduced the Statspack functionality, which Oracle9i extended. In Oracle 10g, Statspack has evolved into the Automatic Workload Repository (AWR).


AWR Features


The AWR is used to collect performance statistics including:

  • Wait events used to identify performance problems.

  • Time model statistics indicating the amount of DB time associated with a process from the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views.

  • Active Session History (ASH) statistics from the V$ACTIVE_SESSION_HISTORY view.

  • Some system and session statistics from the V$SYSSTAT and V$SESSTAT views.

  • Object usage statistics.

  • Resource intensive SQL statements.


The repository is a source of information for several other Oracle 10g features including:

  • Automatic Database Diagnostic Monitor

  • SQL Tuning Advisor

  • Undo Advisor

  • Segment Advisor


Snapshots


By default snapshots of the relevant data are taken every hour and retained for 7 days. The default values for these settings can be altered using:

BEGIN
DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
retention => 43200, -- Minutes (= 30 Days). Current value retained if NULL.
interval => 30); -- Minutes. Current value retained if NULL.
END;
/


The changes to the settings are reflected in the DBA_HIST_WR_CONTROL view. Typically the retention period should capture at least one complete workload cycle: if your system has monthly archives and loads, a one-month retention time would be more beneficial than the default 7 days. An interval of "0" switches off snapshot collection, which in turn stops much of the self-tuning functionality, so this is not recommended. Automatic collection is only possible if the STATISTICS_LEVEL parameter is set to TYPICAL or ALL; if it is set to BASIC, manual snapshots can be taken, but they will be missing some statistics.

Extra snapshots can be taken and existing snapshots can be removed using:

EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;

BEGIN
DBMS_WORKLOAD_REPOSITORY.drop_snapshot_range (
low_snap_id => 22,
high_snap_id => 32);
END;
/


Snapshot information can be queried from the DBA_HIST_SNAPSHOT view.
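
For example, a minimal query listing the available snapshots and their intervals:

SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;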

Baselines


A baseline is a pair of snapshots that represents a specific period of usage. Once baselines are defined, they can be used to compare current performance against similar periods in the past. You may wish to create a baseline to represent a period of batch processing:

BEGIN
DBMS_WORKLOAD_REPOSITORY.create_baseline (
start_snap_id => 210,
end_snap_id => 220,
baseline_name => 'batch baseline');
END;
/


The pair of snapshots associated with a baseline are retained until the baseline is explicitly deleted:

BEGIN
DBMS_WORKLOAD_REPOSITORY.drop_baseline (
baseline_name => 'batch baseline',
cascade => FALSE); -- Deletes associated snapshots if TRUE.
END;
/


Baseline information can be queried from the DBA_HIST_BASELINE view.

Workload Repository Views


The following workload repository views are available:

  • V$ACTIVE_SESSION_HISTORY - Displays the active session history (ASH) sampled every second.

  • V$METRIC - Displays metric information.

  • V$METRICNAME - Displays the metrics associated with each metric group.

  • V$METRIC_HISTORY - Displays historical metrics.

  • V$METRICGROUP - Displays all metrics groups.

  • DBA_HIST_ACTIVE_SESS_HISTORY - Displays the history contents of the active session history.

  • DBA_HIST_BASELINE - Displays baseline information.

  • DBA_HIST_DATABASE_INSTANCE - Displays database environment information.

  • DBA_HIST_SNAPSHOT - Displays snapshot information.

  • DBA_HIST_SQL_PLAN - Displays SQL execution plans.

  • DBA_HIST_WR_CONTROL - Displays AWR settings.


Workload Repository Reports


Oracle provides two scripts to produce workload repository reports (awrrpt.sql and awrrpti.sql). They are similar in format to the statspack reports and offer a choice of HTML or plain text output. The two reports give essentially the same output, but awrrpti.sql allows you to select a single instance. The reports can be generated as follows:

@$ORACLE_HOME/rdbms/admin/awrrpt.sql
@$ORACLE_HOME/rdbms/admin/awrrpti.sql


The scripts prompt you to enter the report format (html or text), the start snapshot id, the end snapshot id and the report filename. The resulting report can be opened in a browser or text editor accordingly.

AWR Report: To generate a text report for a range of snapshot Ids, run the awrrpt.sql script at the SQL prompt:

@$ORACLE_HOME/rdbms/admin/awrrpt.sql

First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text

Specify the number days for which you want to list snapshot Ids.
Enter value for num_days: 2


After the list displays, you are prompted for the beginning and ending snapshot Id for the workload repository report.
Enter value for begin_snap: 100

Enter value for end_snap: 120

Next, accept the default report name or enter a report name. The default name is accepted in the following example:
Enter value for report_name:

Using the report name awrrpt_1_100_120

The workload repository report is thus generated.

Below is a sample output of the AWR Report.

WORKLOAD REPOSITORY report for

DB Name  DB Id      Instance  Inst num  Release     Cluster  Host
TEST     118468335  TEST      1         10.1.0.4.0  NO       test-db.ora.hp

             Snap Id  Snap Time           Sessions  Cursors/Session
Begin Snap:  834      11-Jul-06 01:27:31  1,732     177.7
End Snap:    836      11-Jul-06 02:27:36  2,038     196.3
Elapsed:     60.08 (mins)
DB Time:     381.35 (mins)

Report Summary

Cache Sizes (end)

Buffer Cache:      1,376M    Std Block Size:  8K
Shared Pool Size:  1,856M    Log Buffer:      10,240K

Load Profile

                  Per Second    Per Transaction
Redo size:        2,389,415.27  27,164.76
Logical reads:    108,397.97    1,232.35
Block changes:    15,015.45     170.71
Physical reads:   158.28        1.80
Physical writes:  247.04        2.81
User calls:       3,714.26      42.23
Parses:           797.76        9.07
Hard parses:      1.97          0.02
Sorts:            1,014.79      11.54
Logons:           0.77          0.01
Executes:         7,300.46      83.00
Transactions:     87.96

% Blocks changed per Read:   13.85   Recursive Call %:  76.60
Rollback per transaction %:  22.64   Rows per Sort:     12.46

Instance Efficiency Percentages (Target 100%)

Buffer Nowait %:              99.99   Redo NoWait %:     100.00
Buffer Hit %:                 99.86   In-memory Sort %:  100.00
Library Hit %:                99.98   Soft Parse %:      99.75
Execute to Parse %:           89.07   Latch Hit %:       99.57
Parse CPU to Parse Elapsd %:  84.64   % Non-Parse CPU:   96.72

Shared Pool Statistics

                            Begin   End
Memory Usage %:             51.28   71.99
% SQL with executions>1:    85.40   93.22
% Memory for SQL w/exec>1:  78.16   85.79

Top 5 Timed Events

Event                    Waits    Time(s)  Percent Total DB Time  Wait Class
CPU time                          17,107   74.76
db file sequential read  498,214  2,773    12.12                  User I/O
log file sync            129,031  2,010    8.78                   Commit
log file parallel write  445,391  517      2.26                   System I/O
latch: library cache     26,609   141      .62

mercredi 26 mai 2010

A Unix/Linux "find" Command Tutorial



The find command is used to locate files on a Unix or Linux system.  find will search any set of directories you specify for files that match the supplied search criteria.  You can search for files by name, owner, group, type, permissions, date, and other criteria.  The search is recursive in that it will search all subdirectories too.  The syntax looks like this:
find where-to-look criteria what-to-do

All arguments to find are optional, and there are defaults for all parts.  (This may depend on which version of find is used.  Here we discuss the freely available GNU version of find, which is the version available on YborStudent.)  For example where-to-look defaults to . (that is, the current working directory), criteria defaults to none (that is, show all files), and what-to-do (known as the find action) defaults to -print (that is, display the names of found files to standard output).  Technically the criteria and actions are all known as find primaries.

For example:
find

will display the pathnames of all files in the current directory and all subdirectories.  The commands
find . -print
find -print
find .

do the exact same thing.  Here's an example find command using a search criterion and the default action:
find / -name foo

This will search the whole system for any files named foo and display their pathnames.  Here we are using the criterion -name with the argument foo to tell find to perform a name search for the filename foo.  The output might look like this:
/home/wpollock/foo
/home/ua02/foo
/tmp/foo

If find doesn't locate any matching files, it produces no output.

The above example said to search the whole system, by specifying the root directory (/) to search.  If you don't run this command as root, find will display an error message for each directory on which you don't have read permission.  This can be a lot of messages, and the matching files that are found may scroll right off your screen.  A good way to deal with this problem is to redirect the error messages so you don't have to see them at all:
find / -name foo 2>/dev/null

You can specify as many places to search as you wish:
find /tmp /var/tmp . $HOME -name foo



Advanced Features And Applications:





The -print action lists the names of files separated by a newline.  But it is common to pipe the output of find into xargs, which uses a space to separate file names.  This can lead to a problem if any found files contain spaces in their names, as the output doesn't use any quoting.  In such cases, when the output of find contains a file name such as foo bar and is piped into another command, that command sees two file names, not one file name containing a space.  Even without using xargs you could have a problem if the file name contains a newline character.

In such cases you can specify the action -print0 instead.  This lists the found files separated not with a newline but with a null (or NUL) character, which is not a legal character in Unix or Linux file names.  Of course the command that reads the output of find must be able to handle such a list of file names.  Many commands commonly used with find (such as tar or cpio) have special options to read in file names separated with NULs instead of spaces.

You can use shell-style wildcards in the -name search argument:
find . -name foo\*bar

This will search from the current directory down for foo*bar (that is, any filename that begins with foo and ends with bar).  Note that wildcards in the name argument must be quoted so the shell doesn't expand them before passing them to find.  Also, unlike regular shell wildcards, these will match leading periods in filenames.  (For example find -name \*.txt.)

You can search for other criteria beside the name.  Also you can list multiple search criteria.  When you have multiple criteria any found files must match all listed criteria.  That is, there is an implied Boolean AND operator between the listed search criteria.  find also allows OR and NOT Boolean operators, as well as grouping, to combine search criteria in powerful ways (not shown here.)

Here's an example using two search criteria:
find / -type f -mtime -7 | xargs tar -rf weekly_incremental.tar
gzip weekly_incremental.tar

will find any regular files (i.e., not directories or other special files) with the criteria -type f, and only those modified seven or fewer days ago (-mtime -7).  Note the use of xargs, a handy utility that converts a stream of input (in this case the output of find) into command line arguments for the supplied command (in this case tar, used to create a backup archive).

Using the tar option -c is dangerous here;  xargs may invoke tar several times if there are many files found and each -c will cause tar to over-write the previous invocation.  The -r option appends files to an archive.  Other options such as those that would permit filenames containing spaces would be useful in a production quality backup script.

Another use of xargs is illustrated below.  This command will efficiently remove all files named core from your system (provided you run the command as root of course):
find / -name core | xargs /bin/rm -f
find / -name core -exec /bin/rm -f '{}' \; # same thing
find / -name core -delete # same if using Gnu find

(The last two forms run the rm command once per file, and are not as efficient as the first form.  However the first form is safer if rewritten to use -print0.)

One of my favorite of the find criteria is used to locate files modified less than 10 minutes ago.  I use this right after using some system administration tool, to learn which files got changed by that tool:
find / -mmin -10

(This search is also useful when I've downloaded some file but can't locate it, only in that case -cmin may work better.  Keep in mind neither of these criteria is standard; -mtime and -ctime are standard, but use days and not minutes.)

Another common use is to locate all files owned by a given user (-user username).  This is useful when deleting user accounts.

You can also find files with various permissions set.  -perm /permissions means to find files with any of the specified permissions on, -perm -permissions means to find files with all of the specified permissions on, and -perm permissions means to find files with exactly those permissions.  Permissions can be specified either symbolically (preferred) or with an octal number.  The following will locate files that are writeable by others (including symlinks, which should be writeable by all):
find . -perm -o=w

(Using -perm is more complex than this example shows.  You should check both the POSIX documentation for find (which explains how the symbolic modes work) and the Gnu find man page (which describes the Gnu extensions).)

When using find to locate files for backups, it often pays to use the -depth option (really a criterion that is always true), which forces the output to be depth-first—that is, files first and then the directories containing them.  This helps when the directories have restrictive permissions, and restoring the directory first could prevent the files from restoring at all (and would change the time stamp on the directory in any case).  Normally, find returns the directory first, before any of the files in that directory.  This is useful when using the -prune action to prevent find from examining any files you want to ignore:
find / -path /dev -prune -o -print | xargs tar ...

When specifying time with find options such as -mmin (minutes) or -mtime (24 hour periods, starting from now), you can specify a number n to mean exactly n, -n to mean less than n, and +n to mean more than n.

Fractional 24-hour periods are truncated!  That means that find -mtime +1 says to match files modified two or more days ago.

For example:
find . -mtime 0   # find files modified between now and 1 day ago
# (i.e., within the past 24 hours)
find . -mtime -1 # find files modified less than 1 day ago
# (i.e., within the past 24 hours, as before)
find . -mtime 1 # find files modified between 24 and 48 hours ago
find . -mtime +1 # find files modified more than 48 hours ago

find . -mmin +5 -mmin -10 # find files modified between
# 6 and 9 minutes ago

Using the -printf action instead of the default -print is useful to control the output format better than you can with ls or dir.  You can use find with -printf to produce output that can easily be parsed by other utilities or imported into spreadsheets or databases.  See the man page for the dozens of possibilities with the -printf action.  (In fact find with -printf is more versatile than ls and is the preferred tool for forensic examiners even on Windows systems, to list file information.)  For example the following displays non-hidden (no leading dot) files in the current directory only (no subdirectories), with a custom output format:
find . -maxdepth 1 -name '[!.]*' -printf 'Name: %16f Size: %6s\n'



-maxdepth is a Gnu extension.  On a modern, POSIX version of find you could use this:
find . -path './*' -prune ...

On any version of find you can use this more complex (but portable) code:
find . ! -name . -prune ...

which says to prune (don't descend into) any directories except ..

Note that -maxdepth 1 will include . unless you also specify -mindepth 1.  A portable way to include . is:
 find . \( -name . -o -prune \) ...

[This information posted by Stephane Chazelas, on 3/10/09 in newsgroup comp.unix.shell.]


As a system administrator you can use find to locate suspicious files (e.g., world writable files, files with no valid owner and/or group, SetUID files, files with unusual permissions, sizes, names, or dates).  Here's a final more complex example (which I saved as a shell script):
find / -noleaf -wholename '/proc' -prune \
-o -wholename '/sys' -prune \
-o -wholename '/dev' -prune \
-o -wholename '/windows-C-Drive' -prune \
-o -perm -2 ! -type l ! -type s \
! \( -type d -perm -1000 \) -print

This says to search the whole system, skipping the directories /proc, /sys, /dev, and /windows-C-Drive (presumably a Windows partition on a dual-booted computer).  The Gnu -noleaf option tells find not to assume all remaining mounted filesystems are Unix file systems (you might have a mounted CD for instance).  The -o is the Boolean OR operator, and ! is the Boolean NOT operator (applies to the following criteria).

So these criteria say to locate files that are world writable (-perm -2, same as -o=w) and NOT symlinks (! -type l) and NOT sockets (! -type s) and NOT directories with the sticky (or text) bit set (! \( -type d -perm -1000 \)).  (Symlinks, sockets and directories with the sticky bit set are often world-writable and generally not suspicious.)

A common request is a way to find all the hard links to some file.  Using ls -li file will tell you how many hard links the file has, and the inode number.  You can locate all pathnames to this file with:
  find mount-point -xdev -inum inode-number

Since hard links are restricted to a single filesystem, you need to search that whole filesystem, so you start the search at the filesystem's mount point.  (This is likely to be either /home or / for files in your home directory.)  The -xdev option tells find not to search any other filesystems.

(While most Unix and all Linux systems have a find command that supports the -inum criterion, this isn't POSIX standard.  Older Unix systems provided the ncheck utility instead that could be used for this.)



Using -exec Efficiently:





The -exec option to find is great, but since it runs the command listed for every found file it isn't very efficient.  On a large system this makes a difference!  One solution is to combine find with xargs as discussed above:
  find whatever... | xargs command

However this approach has two limitations.  Firstly not all commands accept the list of files at the end of the command.  A good example is cp:
find . -name \*.txt | xargs cp /tmp  # This won't work!

(Note the Gnu version of cp has a non-POSIX option -t for this, and xargs has options to handle this too.)

Secondly filenames may contain spaces or newlines, which would confuse the command used with xargs.  (Again Gnu tools have options for that, find ... -print0 |xargs -0 ....)

There are POSIX (but non-obvious) solutions to both problems.  An alternate form of -exec ends with a plus-sign, not a semi-colon.  This form collects the filenames into groups or sets, and runs the command once per set.  (This is exactly what xargs does, to prevent argument lists from becoming too long for the system to handle.)  In this form the {} argument expands to the set of filenames.  For example:
find / -name core -exec /bin/rm -f '{}' +

This form of -exec can be combined with a shell feature to solve the other problem (names with spaces).  The POSIX shell allows us to use:
sh -c 'command-line' [ command-name [ args... ] ]

(We don't usually care about the command-name, so X, dummy, or inline cmd is often used.)  Here's an example of efficiently copying found files to /tmp, in a POSIX-compliant way (Posted on comp.unix.shell netnews newsgroup on Oct. 28 2007 by Stephane CHAZELAS):
find . -name '*.txt' -type f \
-exec sh -c 'exec cp -f "$@" /tmp' find-copy {} +



Common Gotcha:





If the given expression to find does not contain any of the action primaries -exec, -ok, or -print, the given expression is effectively replaced by:
find \( expression \) -print

The implied parenthesis can cause unexpected results.  For example, consider these two similar commands:
$ find -name tmp -prune -o -name \*.txt
./bin/data/secret.txt
./tmp
./missingEOL.txt
./public_html/graphics/README.txt
./datafile2.txt
./datafile.txt

$ find -name tmp -prune -o -name \*.txt -print
./bin/data/secret.txt
./missingEOL.txt
./public_html/graphics/README.txt
./datafile2.txt
./datafile.txt

The lack of an action in the first command means it is equivalent to:
find . \( -name tmp -prune -o -name \*.txt \) -print

This causes tmp to be included in the output.  However for the second find command the normal rules of Boolean operator precedence apply, so the pruned directory does not appear in the output.




The find command can be amazingly useful.  See the man page to learn all the criteria and actions you can use.

jeudi 20 mai 2010

Upgrade an Oracle Instance 10.2.0.3 to 10.2.0.4

Manually upgrading an Oracle instance from 10.2.0.3 to 10.2.0.4 is a pretty straightforward routine if one already has a 10.2.0.4 home on the machine.

Simply shut down the instance and edit /etc/oratab and /usr/local/oraenv (if this has been changed from the original version) to make sure your instance now uses the new ORACLE_HOME.

Start up the instance in upgrade mode and run the pre-check script.

$ sqlplus / as sysdba
SQL>startup upgrade
SQL>spool upgrade_info.log
SQL>@?/rdbms/admin/utlu102i.sql
SQL>spool off


Check the log for possible problems and correct them if needed

Now run the upgrade script

$ sqlplus / as sysdba
SQL>spool patch.log
SQL>@?/rdbms/admin/catupgrd.sql
SQL>spool off


Check the log for errors. Most likely there will be an error ORA-00904 regarding OTHER_XML for the package body DBMS_SQLPA.

Also, at the end of the log there is a scary message saying:

Component Status Version HH:MM:SS
Oracle Database Server INVALID 10.2.0.4.0 00:06:56


Looks bad, but in this case it's not.
A select COMP_NAME, VERSION, STATUS FROM SYS.DBA_REGISTRY; will show what really went wrong:
Oracle Database Packages and Types 10.2.0.4.0 INVALID

The fix for this problem is to drop the plan_table and re-create it (See MetaLink Note: 565600.1)

$ sqlplus / as sysdba
SQL>drop table plan_table;
SQL>@?/rdbms/admin/utlxplan
SQL>@?/rdbms/admin/prvtspao.plb
SQL>@?/rdbms/admin/utlrp.sql


This should fix the problem and the upgrade to 10.2.0.4 is done.

what is your 10 common linux commands?

What commands do you use regularly? I am sure you are thinking of ls and cd. Yeah, they are common for every user, but how about the rest of them? I have constructed a combo of commands to help you identify your top ten Linux commands.



The idea is simple, we gather info from history, lets look at the command combo’s now.
history | awk '{CMD[$2]++;count++;}END { for (a in CMD)print CMD[a] " " CMD[a]/count*100 "% " a;}' | grep -v "./" | column -c3 -s " " -t | sort -nr | nl |  head -n10

The command line above may look complicated, so let me briefly explain it part by part. awk is the most important part of the line above: it simply stores each command and counts its occurrences in history (column 2, $2); at the end of the operation, it prints the results accordingly. Check out awk examples for more illustration on awk.

The resulting output is then passed to grep -v to filter out "./", because ./something is not a Linux command. After that, the data is arranged in columns and sorted numerically. Only the top ten are listed, numbered, using head and nl.

I have also created a script, so you can download it and run.

The results might vary based on your daily activities and Linux distribution. I have two laptops: one I use at home and another I use at the office. OK, here are my 10 common Linux commands.

Linux Distro: kubuntu 7.10

Desc: My personal laptop.

Activities: Online, Blogging, and leisure stuff.


1 97 19.4% ls
2 78 15.6% cd
3 46 9.2% vi
4 27 5.4% sudo
5 25 5% apt-get
6 15 3% ps
7 13 2.6% rm
8 13 2.6% ll
9 11 2.2% man
10 8 1.6% mv

Linux Distro: Ubuntu 7.04

Desc: Office Laptop

Activities: coding in c/c++, testing on the modules coded.

1 94 18.8% vi
2 83 16.6% g++
3 77 15.4% ls
4 69 13.8% cd
5 11 2.2% ps
6 10 2% man
7 8 1.6% sudo
8 6 1.2% apt-get
9 5 1% cp
10 4 0.8% vis

From the results above, you may have noticed that I use vi and g++ more than cd and ls while working.

p.s. vis is an alias: alias vis='sudo vi'

List command line history with timestamp

history is a common shell command that lists all previously executed commands. It is very useful when it comes to investigating which commands were executed that tore down the server. With the help of the last command, you are able to track the login time of a particular user as well as the duration of time he/she stayed logged in.
last
...
mysurface tty7 :0 Mon Oct 6 20:07 - down (00:00)
reboot system boot 2.6.24.4-64.fc8 Mon Oct 6 20:06 (00:00)
mysurface pts/8 10.168.28.44 Mon Oct 6 17:42 - down (01:58)
mysurface pts/7 :0.0 Mon Oct 6 17:41 - 19:40 (01:59)
mysurface pts/6 :0.0 Mon Oct 6 17:27 - 19:40 (02:13)
mysurface pts/5 :0.0 Mon Oct 6 17:27 - 19:40 (02:13)
mysurface pts/5 :0.0 Mon Oct 6 15:52 - 15:59 (00:07)
...

If the command line history provided the date and time each command was executed, that would really narrow down the scope of the user actions that caused the server to malfunction. By default, history does not show timestamps, but it is easy to configure it to display them: you just need to set one environment variable, HISTTIMEFORMAT.

HISTTIMEFORMAT takes a strftime format string. Check out the strftime manual to choose and construct the timestamp that suits your taste. My favorite is "%F %T ".
export HISTTIMEFORMAT="%F %T "

Execute history again and you will see the effect on the spot. Bear in mind that the timestamps for command lines executed in previous sessions may not be valid, as the time was not tracked then.
...
994 2008-10-16 02:27:40 exit
995 2008-10-16 01:12:20 iptables -nL
996 2008-10-16 01:47:46 vi .bash_profile
997 2008-10-16 01:47:55 history
998 2008-10-16 01:48:03 . .bash_profile
999 2008-10-16 01:48:04 history
1000 2008-10-16 01:48:09 exit
1001 2008-10-16 02:27:43 history
...

I suggest putting the export into ~/.bash_profile as well as /root/.bash_profile. If you do not have a .bash_profile, you can put it into ~/.bashrc instead.
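
A one-liner to persist it (adjust the target file to whichever startup file your shell actually reads):

echo 'export HISTTIMEFORMAT="%F %T "' >> ~/.bash_profile
. ~/.bash_profile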

mercredi 28 avril 2010

Configure Database to Run in ARCHIVELOG Mode

Part I)

First of all you must configure the following parameters related to database archiving.

You have two main options for defining archive log destinations:

1) Flash Recovery Area (recommended)

In this part, you must first set the size allocated to the FRA:

SQL> alter system set DB_RECOVERY_FILE_DEST_SIZE=2G scope=both;

Note:

DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST.

Then specify the location of the FRA. You have three options for defining the location:

  • Locally attached storage


SQL> alter system set DB_RECOVERY_FILE_DEST='/opt01/app/oracle/flash_recovery_area' scope=both;

  • Oracle's Cluster File System (OCFS)

  • Automatic Storage management (ASM)


SQL> alter system set DB_RECOVERY_FILE_DEST='+DATA' scope=both;

(DATA = ASM Disk volume)

Note: In a RAC database, all instances must have the same values for these parameters(DB_RECOVERY_FILE_DEST, DB_RECOVERY_FILE_DEST_SIZE).

2) ARCHIVELOG Destination Directories

  • Directories


SQL> alter system set LOG_ARCHIVE_DEST_1='LOCATION=/opt02/app/oracle/second_recovery_area mandatory' scope=both;

Note: If you want to use directories and FRA at the same time you must set  LOG_ARCHIVE_DEST_10 parameter as following command:

SQL> alter system set LOG_ARCHIVE_DEST_10='LOCATION=USE_DB_RECOVERY_FILE_DEST' scope=both;

  • Network Directories


SQL> alter system set LOG_ARCHIVE_DEST_2='LOCATION=/opt03/mnt_drive_z optional' scope=both;

  • Database Service Name (Standby Database/Data guard)


SQL> alter system set LOG_ARCHIVE_DEST_3='SERVICE=standby_db mandatory' scope=both;

Note: You can define up to 10 different archive log destinations, and mark each location as MANDATORY or OPTIONAL.

The other important parameters are:

LOG_ARCHIVE_MIN_SUCCEED_DEST

SQL> alter system set LOG_ARCHIVE_MIN_SUCCEED_DEST=2 scope=both;

Note: This parameter defines the minimum number of destinations that the log file must be copied to before it can be overwritten. This value should be greater than or equal to the number of MANDATORY destinations in LOG_ARCHIVE_DEST_n.

LOG_ARCHIVE_FORMAT

SQL> alter system set LOG_ARCHIVE_FORMAT ='LOG%s_%t_%r.ARC' scope=both;

Part II)

SQL> connect / as sysdba

Part III)

SQL> shutdown immediate

Part IV)

SQL> startup mount

Part V)

SQL>alter database archivelog;

Part VI)

SQL> alter database open;

Part VII)

To display the status of archiving:

SQL> select log_mode from v$database;

Or

SQL> archive log list;
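
Typical archive log list output looks like this (the sequence numbers are illustrative):

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     12
Next log sequence to archive   14
Current log sequence           14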

mardi 27 avril 2010

Generate Workload CPU statistics using dbms_stats.gather_system_stats

In Oracle 9i and up, you can generate CPU cost information during a workload with a procedure of the dbms_stats package.



The dbms_stats.gather_system_stats packaged procedure can be used over a certain interval to measure the actual I/O and CPU usage during a workload, typically a day's work.
SQL> exec dbms_stats.gather_system_stats('START')

PL/SQL procedure successfully completed.

SQL> -- days processing
SQL> exec dbms_stats.gather_system_stats('STOP')

PL/SQL procedure successfully completed.
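
Alternatively, you can let the gathering stop automatically after a given period with the INTERVAL mode (the interval is in minutes):

SQL> exec dbms_stats.gather_system_stats('INTERVAL', interval => 60)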

Now that you have gathered workload CPU statistics, you can query the sys.aux_stats$ data dictionary table to see the actual values that will be used when generating your SQL plans:
select sname, pname, pval1
from sys.aux_stats$;

SNAME PNAME PVAL1
------------- --------- -------
SYSSTATS_INFO STATUS
SYSSTATS_INFO DSTART
SYSSTATS_INFO DSTOP
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 502.005
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM 7.618
SYSSTATS_MAIN MREADTIM 14.348
SYSSTATS_MAIN CPUSPEED 507
SYSSTATS_MAIN MBRC 6
SYSSTATS_MAIN MAXTHR 32768
SYSSTATS_MAIN SLAVETHR

13 rows selected.

CPUSPEEDNW, IOSEEKTIM, and IOTFRSPEED are noworkload statistics;

SREADTIM, MREADTIM, CPUSPEED, MBRC, MAXTHR, and SLAVETHR represent workload statistics.



When you have both workload and noworkload statistics, the optimizer will use workload statistics.

  • SREADTIM - single block read time (msec): the average time Oracle takes to read a single block

  • MREADTIM - multiblock read time (msec): the average time taken to read sequentially

  • MBRC - multiblock read count: the average number of blocks read during multiblock sequential reads. This value is used instead of the db_file_multiblock_read_count parameter during query optimization to compute costs for table and fast full index scans

  • MAXTHR - maximum I/O system throughput: is captured only if the database runs parallel queries

  • SLAVETHR - maximum slave I/O throughput: is captured only if the database runs parallel queries

ORA-02292: integrity constraint (%s.%s) violated - child record

You are trying to delete a record, but one or more child records were found, preventing the delete.
A parent/child relationship between two tables is defined by a foreign key constraint.
For example:

CREATE TABLE PARENT 
(
ID NUMBER PRIMARY KEY
);

Table created.

CREATE TABLE CHILD
(
ID NUMBER PRIMARY KEY,
P_ID NUMBER REFERENCES PARENT(ID)
);

Table created.

SQL> insert into parent values(1);

1 row created.

SQL> insert into parent values(2);

1 row created.

SQL> insert into child values(10, 1);

1 row created.

SQL> insert into child values(11, 1);

1 row created.

SQL> commit;

Commit complete.

SQL> delete from parent where id=2;

1 row deleted.

SQL> delete from parent where id=1;
delete from parent where id=1
*
ERROR at line 1:
ORA-02292: integrity constraint (MYOWNER.SYS_C008616) violated - child record
found


We could delete parent record 2 because there were no child records for this record.
Parent record 1 however has 2 child records, preventing the record from being deleted.

You can query the all_cons_columns view to see what child columns matches the parent columns:

select parentcols.owner||'.'||parent.table_name||'.'||parentcols.column_name||' matches to '||
childcols.owner||'.'||child.table_name||'.'||childcols.column_name
from
all_cons_columns childcols
, all_cons_columns parentcols
, all_constraints child
, all_constraints parent
where childcols.owner='MYOWNER' and childcols.constraint_name='SYS_C008616'
and childcols.owner=child.owner
and childcols.constraint_name=child.constraint_name
and child.r_owner=parent.owner
and child.r_constraint_name=parent.constraint_name
and parent.owner=parentcols.owner
and parent.constraint_name=parentcols.constraint_name
and childcols.position=parentcols.position
order by childcols.position;


MYOWNER.PARENT.ID matches to MYOWNER.CHILD.P_ID


The solution is to delete the child records first, before deleting the parent record, or to specify the ON DELETE CASCADE option on the foreign key.
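
Deleting the children first, using the tables from this example, is simply:

SQL> delete from child where p_id = 1;
SQL> delete from parent where id = 1;

Alternatively, recreate the foreign key with ON DELETE CASCADE: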

SQL> drop table child;

Table dropped.

CREATE TABLE CHILD
(
ID NUMBER PRIMARY KEY,
P_ID NUMBER REFERENCES PARENT(ID) on delete cascade
);

Table created.


SQL> insert into child values(10, 1);

1 row created.

SQL> commit;

Commit complete.

SQL> delete from parent where id=1;

1 row deleted.

SQL> select * from child;

no rows selected

jeudi 22 avril 2010

How to Safely Remove a Data Guard Broker Configuration

Doc ID: 261336.1        Type: REFERENCE
Modified Date: 20-MAY-2009      Status: PUBLISHED

PURPOSE
-------

How to safely remove a Data Guard Broker configuration.

SCOPE & APPLICATION
-------------------

This applies to all DBAs who wish to remove and deactivate a
Data Guard Broker configuration.

Removing the Configuration
----------------------------

1) Remove the Data Guard Broker Configuration

a) Using the Commandline DGMGRL:

- Connect to the Primary Database
DGMGRL> connect sys/@
- Remove the Data Guard Broker Configuration
DGMGRL> remove configuration;

b) Using Data Guard Manager or Grid Control:

Follow the Wizard as mentioned in the Chapter
'Scenarios Using Oracle Enterprise Manager'
of the Data Guard Broker Guide of each Database Version.

2) On the primary database set dg_broker_start=false:

SQL> show parameter dg_broker

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      ?/dbs/dr1@.dat
dg_broker_config_file2               string      ?/dbs/dr2@.dat
dg_broker_start                      boolean     TRUE
SQL>
SQL> alter system set dg_broker_start=false scope=both;

System altered.

SQL>  show parameter dg_broker

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      ?/dbs/dr1@.dat
dg_broker_config_file2               string      ?/dbs/dr2@.dat
dg_broker_start                      boolean     FALSE
SQL>

3) On the primary disable any archive destinations that are archiving to the standby:

SQL> select dest_id,destination,status
2  from v$archive_dest
3  where target='STANDBY';

DEST_ID
----------
DESTINATION
--------------------------------------------------------------------------------
STATUS
---------
2
(DESCRIPTION=(ADDRESS_LIST = (ADDRESS=(PROTOCOL=tcp)(HOST=hasunclu2.us.oracle.co
m)(PORT=1521)))(CONNECT_DATA=(SID=DG920phy)(ORACLE_HOME=/u03/oracle/9.2.0)(SERVE
R=DEDICATED)))
VALID

SQL> alter system set log_archive_dest_state_2=defer scope=both;

System altered.

SQL>

4) on the standby set the dg_broker_start parameter to false:

SQL> show parameter dg_broker

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      ?/dbs/dr1@.dat
dg_broker_config_file2               string      ?/dbs/dr2@.dat
dg_broker_start                      boolean     TRUE
SQL> alter system set dg_broker_start=false scope=both;

System altered.

SQL>

5) On both systems, rename the metadata files (if they exist):

oracle@hasunclu1:/u03/oracle/9.2.0/dbs> mv dr1DG920.dat dr1DG920.back
oracle@hasunclu1:/u03/oracle/9.2.0/dbs> mv dr2DG920.dat dr2DG920.back

RELATED DOCUMENTS
--------------------

Oracle® Data Guard Broker Guide

vendredi 2 avril 2010

Finding specific data in Oracle Tables

Imagine there are a few tables in your schema and you want to find a specific value in all columns within these tables. Ideally, there would be an SQL function like


select * from * where any(column) = 'value';


Unfortunately, there is no such function.

However, a PL/SQL function can be written that does that. The following function iterates over all character columns in all tables of the current schema and tries to find val in them.


create or replace function find_in_schema(val varchar2)
return varchar2 is
  v_old_table user_tab_columns.table_name%type;
  v_where     varchar2(4000);
  v_first_col boolean := true;
  type rc is ref cursor;
  c       rc;
  v_rowid varchar2(20);

  -- Search v_old_table with the accumulated v_where clause and
  -- print the rowid of every matching row.
  procedure search_table is
  begin
    open c for 'select rowid from "' || v_old_table || '" ' || v_where;
    fetch c into v_rowid;
    while c%found loop
      dbms_output.put_line(' rowid: ' || v_rowid || ' in ' || v_old_table);
      fetch c into v_rowid;
    end loop;
    close c;
  end;

begin
  for r in (
    select t.*
    from user_tab_cols t, user_all_tables a
    where t.table_name = a.table_name
    and t.data_type like '%CHAR%'
    order by t.table_name) loop

    if v_old_table is null then
      v_old_table := r.table_name;
    end if;

    -- A new table begins: search the previous one, then reset.
    if v_old_table <> r.table_name then
      search_table;
      v_first_col := true;
      v_old_table := r.table_name;
    end if;

    -- Build up the WHERE clause, one character column at a time.
    if v_first_col then
      v_where := ' where ' || r.column_name || ' like ''%' || val || '%''';
      v_first_col := false;
    else
      v_where := v_where || ' or ' || r.column_name || ' like ''%' || val || '%''';
    end if;

  end loop;

  -- The loop only searches a table when the next one begins,
  -- so the last table still has to be searched here.
  if v_old_table is not null then
    search_table;
  end if;

  return 'Success';
end;
/


The function in action


Let's see the function in action. First, some tables are created:


create table test_find_1 (
a number,
b varchar2(10),
c varchar2(20),
d varchar2(30)
);

create table test_find_2 (
e number,
f varchar2(30),
g varchar2(20),
h varchar2(10)
);

create table test_find_3 (
i number,
j varchar2(15),
k varchar2(15),
l varchar2(15)
);


Then, the tables are filled:


insert into test_find_1 values (1, 'Orange' , 'Grape'    , 'Papaya' );
insert into test_find_1 values (2, 'Apple'  , 'Pear'     , 'Coconut');
insert into test_find_1 values (3, 'Mango'  , 'Lime'     , 'Banana' );

insert into test_find_2 values (1, 'Apricot', 'Kiwi'     , 'Lemon'  );
insert into test_find_2 values (2, 'Peach'  , 'Dates'    , 'Pear'   );
insert into test_find_2 values (3, 'Lime'   , 'Mango'    , 'Grape'  );

insert into test_find_3 values (1, 'Papaya' , 'Banana'   , 'Mango'  );
insert into test_find_3 values (2, 'Lime'   , 'Plum'     , 'Cherry' );
insert into test_find_3 values (3, 'Rhubarb', 'Pineapple', 'Carrot' );


The function uses dbms_output. Therefore, we need to enable serveroutput in SQL*Plus.


set serveroutput on size 1000000 format wrapped


Executing the function:


select find_in_schema('Pear') from dual;


The output:


 rowid: AAACQNAAEAAAAHCAAB in TEST_FIND_1
 rowid: AAACQOAAEAAAAHKAAB in TEST_FIND_2


Now, these rowids can be used to find the rows:


select * from test_find_1 where rowid = 'AAACQNAAEAAAAHCAAB';
select * from test_find_2 where rowid = 'AAACQOAAEAAAAHKAAB';


Thanks


Thanks to Steve Stowers, who found a bug and otherwise improved the code.

Thursday, 11 March 2010

vi Quick Reference

This table is taken from Sun Microsystems' User's Guide: Getting Started.

Starting vi

vi filename open or create file

vi +18 filename open to line 18

vi +/"mustard" filename open file to first occurence of ``mustard"

vi -r filename recover crashed file

view filename open file read-only



Cursor Commands

h move left

j move down

k move up

l move right

w move right one word

W move right one word (past punctuation)

b move left one word

B move left one word (past punctuation)

Return move down one line

Back Space move left one character

Space Bar move right one character

H move to top screen

M move to middle of screen

L move to bottom of screen

Ctrl-F scroll forward one screen

Ctrl-D scroll forward one-half screen

Ctrl-B scroll backward one screen

Ctrl-U scroll backward one-half screen



Inserting Characters and Lines

a insert characters to right of cursor

A insert characters to right of cursor, at end of line

i insert characters to left of cursor

I insert characters to left of cursor, at beginning of line

o insert line below cursor

O insert line above cursor



Changing Text

cw change word (or part of word right of cursor)

cc change line

C change part of line to right of cursor

s substitute string for character under cursor

r replace character under cursor with one other character

r-Return break line

J join current line and line below

xp transpose character at cursor & character to right

~ change case of letter (upper or lower)

u undo previous command

U undo all changes to line

:u undo previous last-line command



Deleting Text

x delete character

dw delete word (or part of word to right of cursor)

dd delete line

D delete part of line to right of cursor

:5,10 d delete lines 5-10



Copying and Moving Text

yy yank or copy line

Y yank or copy line

dd delete line

p put yanked or deleted line below current line

P (upper case) put yanked or deleted line above current line

:1,2 co 3 copy lines 1-2 and put after line 3

:4,5 m 6 move lines 4-5 and put after line 6



Setting Line Numbers

:set nu show line numbers

:set nonu hide line numbers



Finding a Line

G go to last line of file

21G go to line 21



Searching and Replacing

/string/ search for string

?string search backwards for string

n find next (or previous) occurrence of string

:g/search/s//replace/gc search and replace, confirming at each occurrence
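
For example, the following replaces each occurrence of ``teh" with ``the" throughout the file, asking for confirmation at each match (both strings are placeholders):

:g/teh/s//the/gc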



Clearing the Screen

Ctrl-L clear scrambled screen



Inserting a File Into a File

:r filename insert (read) filename after cursor

:34 r filename insert filename after line 34



Saving and Quitting

:w save changes (write buffer)

:w filename write buffer to filename

:wq save changes and quit vi

ZZ save changes and quit vi

:q! quit without saving changes

Thursday, 4 March 2010

Tuning Oracle's Buffer Cache

Oracle maintains its own buffer cache inside the system global area (SGA) for each instance. A properly sized buffer cache can usually yield a cache hit ratio over 90%, meaning that nine requests out of ten are satisfied without going to disk.

If a buffer cache is too small, the cache hit ratio will be small and more physical disk I/O will result. If a buffer cache is too big, then parts of the buffer cache will be under-utilized and memory resources will be wasted.

Checking The Cache Hit Ratio

Oracle maintains statistics of buffer cache hits and misses. The following query will show you the overall buffer cache hit ratio for the entire instance since it was started:
     SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
FROM v$sysstat P1, v$sysstat P2, v$sysstat P3
WHERE P1.name = 'db block gets'
AND P2.name = 'consistent gets'
AND P3.name = 'physical reads'

You can also see the buffer cache hit ratio for one specific session since that session started:
     SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
FROM v$sesstat P1, v$statname N1, v$sesstat P2, v$statname N2,
v$sesstat P3, v$statname N3
WHERE N1.name = 'db block gets'
AND P1.statistic# = N1.statistic#
AND P1.sid = <enter SID of session here>
AND N2.name = 'consistent gets'
AND P2.statistic# = N2.statistic#
AND P2.sid = P1.sid
AND N3.name = 'physical reads'
AND P3.statistic# = N3.statistic#
AND P3.sid = P1.sid

You can also measure the buffer cache hit ratio between time X and time Y by collecting statistics at times X and Y and computing the deltas.
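
A minimal sketch of that delta approach, assuming a scratch snapshot table (the table name is illustrative):

     -- Time X: snapshot the three statistics
     CREATE TABLE cache_stats_snap AS
     SELECT name, value
     FROM   v$sysstat
     WHERE  name IN ('db block gets', 'consistent gets', 'physical reads');

     -- Time Y: compute the hit ratio over the interval from the deltas
     SELECT (SUM(DECODE(S.name, 'db block gets',   S.value - X.value, 0)) +
             SUM(DECODE(S.name, 'consistent gets', S.value - X.value, 0)) -
             SUM(DECODE(S.name, 'physical reads',  S.value - X.value, 0))) /
            (SUM(DECODE(S.name, 'db block gets',   S.value - X.value, 0)) +
             SUM(DECODE(S.name, 'consistent gets', S.value - X.value, 0)))
     FROM   v$sysstat S, cache_stats_snap X
     WHERE  S.name = X.name;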

Adjusting The Size Of The Buffer Cache

The db_block_buffers parameter in the parameter file determines the size of the buffer cache for the instance. The size of the buffer cache (in bytes) is equal to the value of the db_block_buffers parameter multiplied by the data block size.

You can change the size of the buffer cache by editing the db_block_buffers parameter in the parameter file and restarting the instance.
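
For example, with an 8 KB block size, the following lines in the parameter file (the values are illustrative) would give a buffer cache of 20000 * 8192 bytes, about 156 MB:

     db_block_size    = 8192
     db_block_buffers = 20000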

Determining If The Buffer Cache Should Be Enlarged

If you set the db_block_lru_extended_statistics parameter to a positive number in the parameter file for an instance and restart the instance, Oracle will populate a dynamic performance view called v$recent_bucket. This view will contain the same number of rows as the setting of the db_block_lru_extended_statistics parameter. Each row will indicate how many additional buffer cache hits there might have been if the buffer cache were that much bigger.

For example, if you set db_block_lru_extended_statistics to 1000 and restart the instance, you can see how the buffer cache hit ratio would have improved if the buffer cache were one buffer bigger, two buffers bigger, and so on up to 1000 buffers bigger than its current size. Following is a query you can use, along with a sample result:
     SELECT   250 * TRUNC (rownum / 250) + 1 || ' to ' || 
250 * (TRUNC (rownum / 250) + 1) "Interval",
SUM (count) "Buffer Cache Hits"
FROM v$recent_bucket
GROUP BY TRUNC (rownum / 250)

Interval        Buffer Cache Hits
--------------- -----------------
1 to 250                    16083
251 to 500                  11422
501 to 750                    683
751 to 1000                   177

This result set shows that enlarging the buffer cache by 250 buffers would have resulted in 16,083 more hits. If there were about 30,000 hits in the buffer cache at the time this query was performed, then it would appear that adding 500 buffers to the buffer cache might be worthwhile. Adding more than 500 buffers might lead to under-utilized buffers and therefore wasted memory.

There is overhead involved in collecting extended LRU statistics. Therefore you should set the db_block_lru_extended_statistics parameter back to zero as soon as your analysis is complete.

In Oracle7, the v$recent_bucket view was named X$KCBRBH. Only the SYS user can query X$KCBRBH. Also note that in X$KCBRBH the columns are called indx and count, instead of rownum and count.
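
Applying that to the earlier query, the Oracle7 equivalent (run as SYS) would look something like:

     SELECT   250 * TRUNC (indx / 250) + 1 || ' to ' ||
              250 * (TRUNC (indx / 250) + 1) "Interval",
              SUM (count) "Buffer Cache Hits"
     FROM     x$kcbrbh
     GROUP BY TRUNC (indx / 250)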

Determining If The Buffer Cache Is Bigger Than Necessary

If you set the db_block_lru_statistics parameter to true in the parameter file for an instance and restart the instance, Oracle will populate a dynamic performance view called v$current_bucket. This view will contain one row for each buffer in the buffer cache, and each row will indicate how many of the overall cache hits have been attributable to that particular buffer.

By querying v$current_bucket with a GROUP BY clause, you can get an idea of how well the buffer cache would perform if it were smaller. Following is a query you can use, along with a sample result:
     SELECT   1000 * TRUNC (rownum / 1000) + 1 || ' to ' || 
1000 * (TRUNC (rownum / 1000) + 1) "Interval",
SUM (count) "Buffer Cache Hits"
FROM v$current_bucket
WHERE rownum > 0
GROUP BY TRUNC (rownum / 1000)

Interval     Buffer Cache Hits
------------ -----------------
1 to 1000               668415
1001 to 2000            281760
2001 to 3000            166940
3001 to 4000             14770
4001 to 5000              7030
5001 to 6000               959

This result set shows that the first 3000 buffers are responsible for over 98% of the hits in the buffer cache. This suggests that the buffer cache would be almost as effective if it were half the size; memory is being wasted on an oversized buffer cache.

There is overhead involved in collecting LRU statistics. Therefore you should set the db_block_lru_statistics parameter back to false as soon as your analysis is complete.

In Oracle7, the v$current_bucket view was named X$KCBCBH. Only the SYS user can query X$KCBCBH. Also note that in X$KCBCBH the columns are called indx and count, instead of rownum and count.

Full Table Scans

When Oracle performs a full table scan of a large table, the blocks are read into the buffer cache but placed at the least recently used end of the LRU list. This causes the blocks to be aged out quickly, and prevents one large full table scan from wiping out the entire buffer cache.

Full table scans of large tables usually result in physical disk reads and a lower buffer cache hit ratio. You can get an idea of full table scan activity at the data file level by querying v$filestat and joining to SYS.dba_data_files. Following is a query you can use and sample results:
     SELECT   A.file_name, B.phyrds, B.phyblkrd
FROM SYS.dba_data_files A, v$filestat B
WHERE B.file# = A.file_id
ORDER BY A.file_id

FILE_NAME                            PHYRDS   PHYBLKRD
-------------------------------- ---------- ----------
/u01/oradata/PROD/system01.dbf        92832     130721
/u02/oradata/PROD/temp01.dbf           1136       7825
/u01/oradata/PROD/tools01.dbf          7994       8002
/u01/oradata/PROD/users01.dbf           214        214
/u03/oradata/PROD/rbs01.dbf           20518      20518
/u04/oradata/PROD/data01.dbf         593336    9441037
/u05/oradata/PROD/data02.dbf        4638037    4703454
/u06/oradata/PROD/index01.dbf       1007638    1007638
/u07/oradata/PROD/index02.dbf       1408270    1408270

PHYRDS shows the number of reads from the data file since the instance was started. PHYBLKRD shows the actual number of data blocks read. Usually blocks are requested one at a time. However, Oracle requests blocks in batches when performing full table scans. (The db_file_multiblock_read_count parameter controls this batch size.)

In the sample result set above, there appears to be quite a bit of full table scan activity in the data01.dbf data file, since 593,336 read requests have resulted in 9,441,037 actual blocks read.
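
One way to make such files stand out is to compute the average number of blocks read per read request; values well above 1 for a data file suggest multiblock (full table scan) reads. A sketch:

     SELECT   A.file_name, B.phyrds, B.phyblkrd,
              ROUND (B.phyblkrd / B.phyrds, 2) "Blocks Per Read"
     FROM     SYS.dba_data_files A, v$filestat B
     WHERE    B.file# = A.file_id
     AND      B.phyrds > 0
     ORDER BY 4 DESC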

Spotting I/O Intensive SQL Statements

The v$sqlarea dynamic performance view contains one row for each SQL statement currently in the shared SQL area of the SGA for the instance. v$sqlarea shows the first 1000 bytes of each SQL statement, along with various statistics. Following is a query you can use:
     SELECT   executions, buffer_gets, disk_reads, 
first_load_time, sql_text
FROM v$sqlarea
ORDER BY disk_reads

EXECUTIONS indicates the number of times the SQL statement has been executed since it entered the shared SQL area. BUFFER_GETS indicates the collective number of logical reads issued by all executions of the statement. DISK_READS shows the collective number of physical reads issued by all executions of the statement. (A logical read is any block request, whether it was satisfied from the cache or required a physical disk read; a physical read is one that actually went to disk.)

You can review the results of this query to find SQL statements that perform lots of reads, both logical and physical. Consider how many times a SQL statement has been executed when evaluating the number of reads.
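
For example, this variation of the query normalizes reads by execution count, so that a statement executed thousands of times can be compared fairly with one executed only once:

     SELECT   executions, buffer_gets, disk_reads,
              ROUND (disk_reads / executions, 1) "Reads Per Exec",
              sql_text
     FROM     v$sqlarea
     WHERE    executions > 0
     ORDER BY disk_reads / executions DESC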

Conclusion

This brief document gives you the basic information you need in order to optimize the buffer cache size for your Oracle database. Also, you can zero in on SQL statements that cause a lot of I/O, and data files that experience a lot of full table scans.

Monday, 1 February 2010

RedHat5 rpmdb-redhat

Place all the rpm packages in one local sub-directory (let's say /RHEL5) which
does not have any sub-directories, and run:

rpm -ivh createrepo*.rpm
createrepo /RHEL5

Then, in the /etc/yum.repos.d directory, place a file named something like
RedHat-media.repo which reads:

[rh5media]
name=Red Hat-$releasever - Media
baseurl=file:///RHEL5
gpgcheck=0
enabled=0

Then you can type stuff like:

yum install <package-name> --enablerepo=rh5media

Adding a new disk to a VMWare Virtual Machine in Linux

I’ve been using VMWare for a while now and I always get asked some common questions about it. One of those is how to add a new virtual disk to a Linux virtual machine. So in response to that, here are the steps for adding a new SCSI-based virtual disk to a CentOS Linux virtual machine. The steps for adding a disk to a Windows machine are very much the same, except you would use the Disk Management utility from the Control Panel.

Step 1: Open virtual machine settings
Select your virtual machine; as you can see from the screenshot, I selected the Infrastructure virtual machine. Next, press the “Edit virtual machine settings” button to open the Virtual Machine Settings dialog.


Step 2: Add new hardware
From the “Virtual Machine Settings” dialog select the “Add…” button at the bottom of the screen. From this dialog you can also modify how much memory you dedicate to the machine when it boots.
Virtual Machine Settings Dialog - Select the “Add…” button

Step 3: Start hardware wizard
Next we will walk through the “Add Hardware Wizard”; the process makes it very simple to add additional hardware to a predefined virtual machine. From this screen we want to select the “Next >” button.

Step 4: Select new hard disk
From this screen we can see the many types of hardware we can add to a virtual machine. You can emulate just about any piece of hardware that one can expect in a modern operating system. It definitely makes testing with different configurations and devices much easier. For our example we want to select “Hard Disk” and then select the “Next >” button.
Select Hard Disk from the Add Hardware Wizard

Step 5: Create the virtual disk
In the next screen we see the three options for adding a new disk. “Create a new virtual disk” creates a brand new disk for the guest operating system. The second option, “Use an existing virtual disk”, allows you to mount a disk from another virtual machine. I like to do this with my “source” drive: I have one virtual disk with all the Oracle and Linux CDs on it, so I can just mount it to whichever machine needs it when I have to do a new install instead of copying the binaries across disks; it's definitely a big time saver. The last option, “Use a physical disk”, allows you to mount a local physical disk to the operating system, akin to NFS-mounting a drive to a virtual machine. To add a new disk we select the “Create a new virtual disk” option and select the “Next >” button.
Create a New Virtual Disk from the Add Hardware Wizard

Step 6: Select type of disk
Next we want to select the type of disk. I’ve been using VMWare for a long time and agree that the recommended Virtual Disk Type should be SCSI. I don’t know why, but I’ve had much better success with the SCSI virtual disks than the IDE ones. So in this step we want to select “SCSI (Recommended)” and the “Next >” button.
Select Disk Type from Add Hardware Wizard

Step 7: Set disk size and options
Now we want to set the size of the disk we are creating. One of the nice features of VMWare is that you don’t have to allocate all of the disk when you create it. So if you create a 40 GB disk it doesn’t have to take it all right away; the disk will grow as your virtual machine needs it. There is a noticeable performance hit when the disk has to extend, but for most applications it's OK. Also, I will warn that if the virtual disk grows and there is no physical disk space left on the host operating system, you will see a catastrophic failure; in most cases both the host and guest operating systems lock up and become unusable. (Don’t say I didn’t warn you.) Lastly, you can split the disk into 2 GB files; while this isn’t necessary, it makes the disks much easier to manage and move around. For this step we set our disk size (12 GB in this case); I chose not to allocate the disk space right now (the machine has a 300 GB drive with only 20 GB in use) and to split the disk into 2 GB files.
Specify Disk Capacity from the Add Hardware Wizard

Step 8: Name the disk file
This is actually pretty simple in that you decide what you want to physically call the disk and where to put it. .vmdk is the extension for VMWare virtual disks. After we name the disk we can select the “Finish” button which adds the disk to the virtual machine.
Name the Virtual Disk File from the Add Hardware Wizard

Step 9: Ensure new disk exists
So now we can see that the new disk has been added to the “Virtual Machine Settings” within the selected virtual machine. From here the disk acts just like it would if you added a new disk to a standalone server. So we select the “OK” button to continue.
Review Virtual Machine Settings for the newly created disk

Step 10: Boot the virtual machine
From here we just start the virtual machine like we would normally, either by selecting the button on the toolbar or selecting the “Start this virtual machine” link.
Start the Virtual Machine

Step 11: Virtual machine start up
The machine boots normally as it would any other time.
Booting Virtual Machine...

Step 12: Create the Partition
After we’ve logged in and opened a terminal window as root (or another user with root/sudo privileges), we first want to run fdisk on the newly created drive. In Linux the first SCSI drive is sda, the second sdb, the third sdc, and so on. Since this was the second SCSI drive we added to the system, the device is known as /dev/sdb.
The first command we want to run is fdisk /dev/sdb (NOTE: thanks to everyone who caught my typo here). This utility works very much like the DOS utility of the old days and allows you to create and manage partitions. We enter the command n to create a new partition. This is going to be a primary partition (p), and the first partition (number 1). Because I want this disk to consume the full 12 GB I specified earlier, we start at the first cylinder and end at the last cylinder. We then want to write the partition table with the new partition we have just created, so we enter the command w, which writes the new table and exits fdisk. A sample session is sketched below.
Create new partition with fdisk
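
A sketch of that fdisk dialogue (prompts abbreviated; the actual cylinder numbers depend on the disk):

# fdisk /dev/sdb
Command (m for help): n                          <-- new partition
Command action
   e   extended
   p   primary partition (1-4)
p                                                <-- primary partition
Partition number (1-4): 1                        <-- first partition
First cylinder (...): <Return>                   <-- accept default (start of disk)
Last cylinder or +size (...): <Return>           <-- accept default (end of disk)
Command (m for help): w                          <-- write table and exit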

Step 13: Format the partition
Now that we’ve created the partition, we want to format it with a new file system. I’ve decided to use the ext3 filesystem for this disk; ext3 provides all the features of the classic ext2 file system plus journaling, which helps to prevent disk corruption in the event of an improper shutdown and speeds up the recovery process. For a good overview of Linux standard file systems check out this article: http://linux.org.mt/article/filesystems. So, to format the new partition we enter the command mkfs -t ext3 /dev/sdb1. This command makes a new file system of type (-t) ext3 on /dev/sdb1, the first partition on the sdb disk.
Create new filesystem in the virtual machine

Step 14: Create the mount point
Determine where you want to add the new virtual disk you’ve created. I like to create a partition specifically for all the software I install after the basic Linux install, called /software. To do that we run mkdir /software, just a simple make-directory command. Once that is complete we want to mount the newly created partition. Because we haven’t added the partition to /etc/fstab yet, we have to mount it manually. To do that we run mount -t ext3 /dev/sdb1 /software. To break down this command: we run mount with the ext3 filesystem type, mounting the partition /dev/sdb1 on the directory /software. Pretty simple and straightforward. To check that the partition is properly mounted we run df -k, which shows the mounted partitions and the amount of available space.
Create new mount point and mount the new drive
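
Putting the commands from this step together:

mkdir /software                     # create the mount point
mount -t ext3 /dev/sdb1 /software   # mount the new partition manually
df -k                               # verify the mounted partitions and free space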

Step 15: Open the fstab file
The fstab file holds all of the used disks and partitions, and determines how they are supposed to be used by the operating system. So we edit the file to add the newly created partition.


Step 16: Modify the fstab for the new partition
After we open the fstab file in the previous step we add the following line:
/dev/sdb1 /software ext3 defaults 1 1
The first column is the partition name, the second is the default mount point, and the third is the filesystem type. The fourth is the mount options; here I used defaults, which mounts the drive rw, suid, dev, exec, auto, nouser and async. The fifth and sixth columns are the dump and fsck options. If dump is set to 1, the filesystem is marked to be backed up; if you are going to have sensitive material on the drive, it's a good idea to set it to 1. If the fsck column is set to a value greater than 0, the operating system uses that number to determine the order in which fsck checks the filesystems during start up; if it is set to 0 the filesystem is skipped, as for a CD-ROM drive, since it is read-only media. For more information on the fstab file check out this article: http://www.tuxfiles.org/linuxhelp/fstab.html
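
For reference, here is the same entry with each field labelled (the comment line is only an annotation; fstab ignores lines beginning with #):

# device     mount-point   type   options    dump   fsck-order
/dev/sdb1    /software     ext3   defaults   1      1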
Lastly, we write and quit the file with the :wq command.
Editing of the fstab

So now that the fstab has been written, the drive will be mounted and unmounted automatically when the machine is started or shut down. So there you have it, the quick and dirty process for adding a brand new disk to a virtual machine. Until next time…