This section provides information on tuning your Oracle server to optimize the behavior of your ZENworks system:
Query the V$SGAINFO dynamic performance view to show more details about memory usage:
SELECT * FROM V$SGAINFO;
Connect to Oracle Enterprise Manager as SYSDBA and navigate to the Memory Advisors page.
Use this page to verify whether Automatic Memory Management is enabled, to check the total (and maximum) memory size configured, and to view the allocation history graph.
Click Advice to see the graph. This helps you choose the right value for the total memory size.
To avoid paging and swapping at the operating system level, do not exceed the limit of available physical memory when using Oracle Enterprise Manager or the ALTER SYSTEM command.
To obtain maximum performance from the Oracle database, a better option is to keep all the required memory structures in the physical memory, if enough memory is available. In order to do this, it is advisable to keep the SGA limit below the available physical memory.
On the Linux platform, you can use HugePages to obtain a page size of 2 MB instead of the default 4 KB. Memory backed by HugePages is locked and cannot be paged out.
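As an illustrative sketch only (the page count and memlock limits below are assumptions; size them to cover your SGA), HugePages are typically reserved through a kernel parameter and a memory-lock limit for the oracle user:

```
# /etc/sysctl.conf -- reserve 2048 x 2 MB HugePages (about 4 GB; illustrative)
vm.nr_hugepages = 2048

# /etc/security/limits.conf -- allow the oracle user to lock that memory (KB)
oracle soft memlock 4194304
oracle hard memlock 4194304
```

After applying the settings (sysctl -p and a fresh login for the oracle user), the Hugepages_* counters in /proc/meminfo show whether the instance is actually using HugePages. Note that Automatic Memory Management (MEMORY_TARGET) is incompatible with HugePages on Linux.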
For more information, see http://docs.oracle.com/cd/E11882_01/server.112/e16638/memory.htm#PFGRF014.
The Library Cache is part of the Shared Pool, inside the System Global Area. This section explains how to inspect Library Cache usage and how to tune the cache to obtain the best performance from the database.
To tune the Library Cache:
Query the V$LIBRARYCACHE dynamic performance view:
SELECT NAMESPACE, GETS, GETHITRATIO, PINS, PINHITRATIO, RELOADS, INVALIDATIONS FROM V$LIBRARYCACHE;
Calculate the Library Cache Hit ratio:
SELECT SUM(PINS - RELOADS)*100/SUM(PINS) AS "Hit Ratio" FROM V$LIBRARYCACHE;
The Library Cache Hit Ratio is an important parameter for evaluating the use of the Library Cache. The result should be around 99.9 percent.
The Library Cache stores parsed SQL statements, execution plans, PL/SQL blocks, and Java classes, ready to be executed. The application code shared in the Library Cache can be easily reused by different database sessions. The reuse of a piece of code already in the cache is called a Library Cache Hit. A Library Cache Miss occurs when the execution of a piece of code cannot find the already parsed code in the Library Cache.
The Library Cache Hit is also called a soft parse; the Library Cache Miss is called a hard parse.
The main reasons to tune the Library Cache are to minimize misses (reparsing) and avoid invalidations.
To minimize misses:
Increase the size of the SHARED_POOL_SIZE parameter.
Change the CURSOR_SHARING parameter to determine when SQL statements are considered identical, therefore sharing the corresponding execution plan in the Library Cache.
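For example, both parameters can be adjusted with ALTER SYSTEM. The values below are illustrative assumptions, not recommendations; validate them against your workload and available memory:

```sql
-- Enlarge the shared pool (with ASMM/AMM this sets a minimum size)
ALTER SYSTEM SET SHARED_POOL_SIZE = 512M SCOPE=BOTH;

-- Treat statements that differ only in literal values as identical
ALTER SYSTEM SET CURSOR_SHARING = FORCE SCOPE=BOTH;
```

CURSOR_SHARING=FORCE replaces literals with system-generated bind variables, which reduces hard parses for applications that do not use bind variables, at the cost of less precise optimizer statistics per statement.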
Inspect the shared pool reserved memory:
SELECT * FROM V$SHARED_POOL_RESERVED;
Inspect the data dictionary cache statistics:
SELECT PARAMETER, GETS, GETMISSES, (GETS-GETMISSES)*100/GETS AS "Hit Ratio", MODIFICATIONS, FLUSHES FROM V$ROWCACHE WHERE GETS > 0;
Query the V$SHARED_POOL_RESERVED dynamic performance view to inspect statistics about the use of reserved space in the Shared Pool. As with the Library Cache, the goal is to minimize REQUEST_MISSES and REQUEST_FAILURES. If the number of failed requests is increasing, expand the Reserved Pool (and probably also the Shared Pool).
To increase the shared pool size:
To size the Reserved Pool, use the SHARED_POOL_RESERVED_SIZE initialization parameter. The value of this parameter cannot exceed 50 percent of the SHARED_POOL_SIZE parameter.
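A hedged example, assuming a 512 MB shared pool (the value is illustrative; SHARED_POOL_RESERVED_SIZE is a static parameter, so the change takes effect only after a restart):

```sql
-- Reserve roughly 5 percent of a 512 MB shared pool for large allocations
ALTER SYSTEM SET SHARED_POOL_RESERVED_SIZE = 26M SCOPE=SPFILE;
```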
You can use the V$SHARED_POOL_ADVICE dynamic performance view to obtain information about estimated parse time in the shared pool for different shared pool sizes, with a range from 10 percent to 200 percent of the current shared pool size, in equal intervals.
The column ESTD_LC_TIME_SAVED indicates the estimated elapsed parse time saved in seconds, while the ESTD_LC_LOAD_TIME column contains estimated elapsed time in seconds for parsing.
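For instance, the advisory can be queried as follows (column names as documented for Oracle 11g):

```sql
SELECT SHARED_POOL_SIZE_FOR_ESTIMATE AS "Size (MB)",
       SHARED_POOL_SIZE_FACTOR       AS "Factor",
       ESTD_LC_LOAD_TIME             AS "Est. parse time (s)",
       ESTD_LC_TIME_SAVED            AS "Est. time saved (s)"
FROM   V$SHARED_POOL_ADVICE
ORDER  BY SHARED_POOL_SIZE_FOR_ESTIMATE;
```

A size beyond which ESTD_LC_TIME_SAVED stops growing indicates the point of diminishing returns for the shared pool.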
For more information, see http://docs.oracle.com/cd/E11882_01/server.112/e16638/memory.htm#PFGRF94288.
SELECT SUM(GETS - GETMISSES)*100/SUM(GETS) AS "Hit Ratio" FROM V$ROWCACHE;
Keep this value above 85 percent.
Because objects must be loaded into the cache the first time they are referenced, the Hit Ratio can never reach 100 percent.
The size of the Dictionary Cache cannot be changed; it is part of the Shared Pool and is automatically maintained by the database. The database uses an algorithm that prefers to keep dictionary data rather than library cache data in the shared pool, because the performance benefits of doing so are more significant. You can size only the Shared Pool as a whole, using the SHARED_POOL_SIZE initialization parameter.
For more information, see http://docs.oracle.com/cd/E11882_01/server.112/e16638/memory.htm#PFGRF94288.
The PGA is used to store real values of bind variables, sort areas, and cursor state information. In a dedicated server environment, this area is in private user memory.
In a shared-server environment, the session stack space remains in the PGA, while session data and cursor state are moved into the shared pool.
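As a sketch, automatic PGA management is controlled by the PGA_AGGREGATE_TARGET parameter, and V$PGASTAT reports how the PGA is being used (the 1 GB target below is an illustrative assumption):

```sql
-- Set an overall target for the PGA memory of all server processes
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 1G SCOPE=BOTH;

-- Compare the target with the memory actually allocated
SELECT NAME, VALUE FROM V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter', 'total PGA allocated');
```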
Parameters related to cursor management:
OPEN_CURSORS defines the number of concurrent cursors that a user process can use to reference private SQL areas. Increasing this value allows the user to open more cursors simultaneously, but increases memory consumption.
SESSION_CACHED_CURSORS defines the number of session cursors to cache. Setting this parameter to a value greater than zero yields a performance gain when there are repeated parse calls for the same SQL statements, because closed cursors are cached within the session, ready to be reused.
CURSOR_SHARING allows you to define whether the cursors are shared only when they match exactly (using EXACT) or also in other situations (using FORCE and SIMILAR).
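Illustrative settings for the first two parameters (the values are assumptions to be validated against your workload; SESSION_CACHED_CURSORS changed at the system level takes effect only after a restart):

```sql
ALTER SYSTEM SET OPEN_CURSORS = 1000 SCOPE=BOTH;
ALTER SYSTEM SET SESSION_CACHED_CURSORS = 100 SCOPE=SPFILE;
```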
For more information, see http://docs.oracle.com/cd/E11882_01/server.112/e16638/memory.htm#PFGRF01401.
The Buffer Cache stores copies of database blocks read from disk. Because disk I/O is much slower than memory access, it is preferable for the database to perform as few disk reads as possible; this is achieved when most requests are satisfied by data already in the Buffer Cache.
The Buffer Cache is managed using an LRU (least recently used) list, which tracks how recently each database block has been used, and a dirty list, which holds modified blocks that must be written to disk.
Blocks read by a full table scan are added at the LRU end of the list, so they are quickly replaced by the blocks required for subsequent operations, while blocks read by normal operations are added at the MRU end.
To tune the Buffer Cache:
Query the statistics related to the Buffer Cache:
SELECT NAME, VALUE FROM V$SYSSTAT WHERE NAME LIKE '%buffer%';
Estimate the performance with various sizes for the Buffer Cache and different database block sizes:
SELECT BLOCK_SIZE, SIZE_FOR_ESTIMATE, BUFFERS_FOR_ESTIMATE, ESTD_PHYSICAL_READS FROM V$DB_CACHE_ADVICE ORDER BY BLOCK_SIZE, SIZE_FOR_ESTIMATE;
Evaluate the Buffer Cache Hit Ratio from statistics:
SELECT PR.VALUE AS "phy. reads", PRD.VALUE AS "phy. reads direct", PRDL.VALUE AS "phy. reads direct (lob)", SLR.VALUE AS "session logical reads", 1 - (PR.VALUE - PRD.VALUE - PRDL.VALUE) / SLR.VALUE AS "hit ratio" FROM V$SYSSTAT PR, V$SYSSTAT PRD, V$SYSSTAT PRDL, V$SYSSTAT SLR WHERE PR.NAME = 'physical reads' AND PRD.NAME = 'physical reads direct' AND PRDL.NAME = 'physical reads direct (lob)' AND SLR.NAME = 'session logical reads';
Evaluate the statistics and Hit Ratio for various Buffer Pools:
SELECT NAME, PHYSICAL_READS AS "physical reads", DB_BLOCK_GETS AS "DB block gets", CONSISTENT_GETS AS "consistent gets", 1 - (PHYSICAL_READS / (DB_BLOCK_GETS + CONSISTENT_GETS)) AS "hit ratio" FROM V$BUFFER_POOL_STATISTICS WHERE DB_BLOCK_GETS + CONSISTENT_GETS > 0;
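Based on the advisory output, the default buffer cache can then be resized; a hedged example (the 1 GB value is an assumption, not a recommendation):

```sql
-- Ensure the cache advisory is collecting data
ALTER SYSTEM SET DB_CACHE_ADVICE = ON;

-- Resize the default buffer cache (with ASMM/AMM this sets a minimum)
ALTER SYSTEM SET DB_CACHE_SIZE = 1G SCOPE=BOTH;
```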
For more information, see http://docs.oracle.com/cd/E11882_01/server.112/e16638/memory.htm#PFGRF94264.
Almost everything in ZENworks is stored in a database. If you do not create frequent backups of your database, the entire health of the zone is at risk. Therefore, ensure that you design a strategy for database backup and recovery.
Additionally, database backups are useful for routine administrative purposes such as copying a database from one server to another, setting up standby databases, and archiving. The following tools and methods can be used to create backups:
Export/Import: Exports are "logical" database backups as they extract logical definitions and data from the database to a file.
The following is an example of the schema export and import syntax:
expdp zenadmin/novell@zen11sp2 schemas=zenadmin directory=TEST_DIR dumpfile=zenadmin.dmp logfile=expdpzenadmin.log
impdp zenadmin/novell@zen11sp2 schemas=zenadmin directory=TEST_DIR dumpfile=zenadmin.dmp logfile=impdpzenadmin.log
Cold or Offline Backups: Shut the database down and back up all the data files, log files, and control files.
Hot or Online Backups: If the database is available and in ARCHIVELOG mode, put the tablespaces into backup mode and back up their files. Also remember to back up the control files and archived redo log files.
RMAN Backups: Use the RMAN utility to back up the database while it is either online or offline.
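A minimal RMAN sketch for an online backup (assumes the database runs in ARCHIVELOG mode and uses default backup settings):

```
rman target /

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> LIST BACKUP SUMMARY;
```

BACKUP DATABASE PLUS ARCHIVELOG backs up the data files and archived redo logs in one operation; with CONFIGURE CONTROLFILE AUTOBACKUP ON, the control file and server parameter file are included automatically.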
In Oracle, DML operations do not release free space below the table's High Water Mark, which increases table fragmentation. Fragmentation forces full table scans to read empty blocks below the High Water Mark, which reduces database performance.
To identify fragmentation, compare the actual data size with the table size.
Table size, including fragmentation (assuming an 8 KB database block size):
select table_name, round((blocks*8),2)||'kb' "size" from user_tables;
After gathering the table statistics, calculate the actual size of the data in the table:
select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size" from user_tables;
Oracle provides many methods for defragmenting a table. Any process that copies all the table rows can be used to defragment a table:
Alter table <tablename> shrink space compact
Deallocate unused space
CTAS (CREATE TABLE ... AS SELECT) or ALTER TABLE <tablename> MOVE: This defragments the table by rebuilding it and copying the rows into a fresh segment; note that indexes must be rebuilt after a move. The dbms_redefinition package can also be used to defragment an Oracle table online.
Data Pump export/import: The table is dropped and re-created to make it unfragmented.
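As an illustrative sketch of the shrink approach (the table name zen_table is hypothetical; online shrink requires the segment to reside in an ASSM tablespace):

```sql
ALTER TABLE zen_table ENABLE ROW MOVEMENT;
ALTER TABLE zen_table SHRINK SPACE COMPACT; -- compact the rows, keep the HWM
ALTER TABLE zen_table SHRINK SPACE;         -- lower the HWM and release space
```

Running SHRINK SPACE COMPACT first lets the row movement happen during normal operation; the final SHRINK SPACE then holds its lock only briefly while adjusting the High Water Mark.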
NOTE: Sometimes, fragmentation is caused by an incorrect setting for the PCTFREE parameter.
For more information, see http://www.dba-oracle.com/t_reclaiming_disk_space.htm.
The Oracle database provides several tracing tools that can help you monitor and analyze applications running against an Oracle database.
End-to-end application tracing can identify the source of an excessive workload, such as a high-load SQL statement, by client identifier, service, module, action, session, instance, or the entire database. This isolates the problem to a specific user, service, session, or application component.
Oracle Database provides the trcsess command line utility that consolidates tracing information based on specific criteria.
The SQL Trace facility and TKPROF are two basic performance diagnostic tools that can help you monitor applications running against the Oracle database.
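A minimal tracing session might look like the following (the trace file identifier is an arbitrary assumption):

```sql
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'zen_trace';
ALTER SESSION SET SQL_TRACE = TRUE;
-- ... run the statements you want to diagnose ...
ALTER SESSION SET SQL_TRACE = FALSE;
```

The resulting trace file, written to the user dump destination, can then be formatted with TKPROF, for example: tkprof <tracefile>.trc report.txt sys=no. Use trcsess first if you need to consolidate several trace files by session, client identifier, or service.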
For more information, see http://docs.oracle.com/cd/E25054_01/server.1111/e16638/sqltrace.htm.