Wednesday, 14 February 2018

Hash Join Overflow Costing #4 - Temporary Space + PGA Use

Now I want to look at the value reported for "Temporary Space" (TempSpc) and how to check whether the hash table is close to fitting in memory within the PGA or not i.e. just a bit too big or way too big.

Temporary Space

It seems that this is simply the size of the hash table itself, which is in turn the size of the Build data set plus an additional 12 bytes per row. This means we can reuse part of the cost formula posted earlier to estimate the size in kilobytes:
  • Build KBytes + (12 * Build Rows / 1000)
The evidence I have for this is:
  • The "TempSpc" value only changes when the Build size changes - never when the Probe size changes
  • When additional columns are added to the Build data set in the SELECT, the "TempSpc" value increases by the same amount as the Build size increases (for the same number of rows)
  • When the number of rows in the Build data set increases, the "TempSpc" value increases by an additional 11.76 bytes per additional row
    • Which is almost 12 bytes per row, as mentioned before as overhead in the hash table
You could of course "refine" the above formula and use "11.76" instead of "12", and that would give you a value for the hash table that is much closer to what is reported for "TempSpc" in the execution plan. But I think that using 12 is good enough for what is only an estimate.
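As a quick check of this estimate, here it is as simple arithmetic in SQL*Plus, using the figures from the execution plan shown in the post below (Build = 1123 KB and 50,000 rows, reported TempSpc = 1712K):
select 1123 + (12    * 50000 / 1000) as est_tempspc_12,    -- 1723 KB using the 12 bytes per row overhead
       1123 + (11.76 * 50000 / 1000) as est_tempspc_1176   -- 1711 KB, very close to the reported 1712K
  from dual ;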

Hash Table Size vs PGA Size

I mentioned in the first post that the Optimizer uses the value of the internal initialization parameter "_smm_min_size" for the memory it can use as the work area for the hash table within the PGA. Thus if the size of the hash table is larger than this the Optimizer will cost the Hash Join as an overflow to disk operation. Using the formula posted above for the Temporary Space we can now estimate the size of the hash table needed and compare it to this minimum work area size value.

The notes I have on "_smm_min_size" say that it will be 0.1% of the PGA when the size of the PGA is greater than 128 MB, otherwise it will be 128 KB as a minimum. Assuming your PGA is larger than 128 MB then you can calculate what this is yourself. Note also that its value is reported in 10053 trace files along with all other initialization parameters, so you can directly verify its value if you want to.
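If you can connect as SYS, the value can also be checked directly using the usual hidden parameter query against the X$ views - a sketch only, as the X$ views are undocumented:
select p.ksppinm param_name, v.ksppstvl param_value
  from x$ksppi p, x$ksppcv v
 where p.indx = v.indx
   and p.ksppinm in ('_smm_min_size', '_smm_max_size') ;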

At run time, though, the PGA may not be heavily used and may have a lot of unused memory available. An executing query can allocate and use more than the minimum work area value at run time for execution plan operations, up to a limit specified by the "_smm_max_size" initialization parameter. For PGAs larger than 1 GB this is set to be 10% of the PGA size. For smaller PGAs it is 100 MB, or 20% when the PGA is less than 500 MB.
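Putting those rules together for a hypothetical pga_aggregate_target of 2 GB (an assumed figure, purely for illustration), the costing limit and the run time limit work out as follows:
select 2 * 1024 * 1024 * 0.001 as smm_min_size_kb,   -- 0.1% of 2 GB = about 2097 KB (2 MB)
       2 * 1024 * 1024 * 0.10  as smm_max_size_kb    -- 10% of 2 GB = about 209715 KB (205 MB)
  from dual ;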

Knowing how large the hash table needs to be lets you compare it to the maximum possible work area size in your PGA and judge the likelihood of the hash table fitting within the PGA at run time. As the ratio between the minimum and maximum possible work area sizes is 100 (0.1% to 10%), this is a large range, and a hash table may have a reasonable chance of fitting in the PGA at run time, depending on how active your system is and what other queries are running.

Equally it lets you consider what the benefit might be from increasing the size of the PGA significantly, if you have a critical query involving a large hash join operation that is overflowing to disk. That said, there are some ways of rewriting queries that can minimise the size of the data being pulled through into the hash table used by the hash join, and so potentially get a smaller hash table that fits into memory in the PGA. But this depends on multiple factors, mainly the number of columns of data coming through in the Build data set. If this is only a few columns then there won't be anything that can be saved by such a query rewrite.

Tuesday, 6 February 2018

Hash Join Overflow Costing #3 - Simpler Formula

So far I have offered a formula for how a Hash Join that overflows to disk is costed and confirmed that this is only costed in terms of single block disk operations. While that formula produces very accurate results (less than 1% difference to the reported cost for the Hash Join operation) it requires you to obtain size information on each individual column being retrieved from both the Build and Probe data sets. And this could be quite tedious when there are many columns involved, or you don't have an easy way to work out the source columns involved in each data set. There is a simplification we can make to eliminate this detailed per column information and just use the information reported in the execution plan itself.

All of the columns being retrieved for the Build and Probe data sets are actually included within the "Bytes" value reported in the execution plan for each data set. And this value includes both the individual column data and any per-column overhead as well. What it does not include is the extra 12 bytes per row overhead needed in the hash table itself. We can approximate the size of this second part of the data by using the "Rows" value reported in the execution plan.

Examining the "Bytes" and "Rows" values in the execution plans for my test queries (see first post for details of these queries), I can see that the increase in the Hash Join cost is about 0.0485 per KB of increase in either data set (Build or Probe) for the same number of rows. Previously I determined that there was a 12 byte overhead within the hash table per data row expected.

This produces a revised Hash Join cost formula using only values from the execution plan of:
  • ( (Build KBytes + (12 * Build Rows / 1000) ) + (Probe KBytes + (12 * Probe Rows / 1000) ) ) * 0.0485
Note that the "Bytes" values used need to be in "K" units, whereas the "Rows" is not and so is divided by 1000. Checking this formula against the actual Hash Join costs of the test queries I ran I can see that it has an error of about 1% i.e. it is not as accurate as the previous formula, but is still accurate enough I believe.

Let's check how close this is for one of the test queries. Here is the execution plan produced:
| Id  | Operation            | Name   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT     |        |       |       |       |  1266 (100)|          |
|   1 |  SORT AGGREGATE      |        |     1 |    30 |       |            |          |
|*  2 |   HASH JOIN          |        | 50000 | 1464K | 1712K |  1266   (1)| 00:00:01 |
|   3 |    TABLE ACCESS FULL | HJ50K  | 50000 | 1123K |       |   365   (1)| 00:00:01 |
|   4 |    TABLE ACCESS FULL | HJ100K |  100K |  683K |       |   728   (1)| 00:00:01 |
We can see:
  • The Build and Probe data access cost is 365 + 728 = 1093
  • The cost of the Hash Join operation itself is 1266 - 1093 = 173
  • The calculated approximate cost is (1123 + (12 * 50) + 683 + (12 * 100)) * 0.0485
    • = (1123 + 600 + 683 + 1200) * 0.0485
    • = 3606 * 0.0485 
    • = 174.891
  • The difference is +1.891, which is 1.1% of the actual cost
This formula can therefore be a useful check against the cost reported when a Hash Join operation overflows to disk, and for determining which is the biggest cost factor i.e. the Build or the Probe data sets.
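If you want to reuse this check on other execution plans, the same arithmetic can be wrapped up in a small SQL*Plus calculation - the "&build_kb" style names are just substitution variables I have made up for this sketch:
select ( (&build_kb + (12 * &build_rows / 1000))
       + (&probe_kb + (12 * &probe_rows / 1000)) ) * 0.0485 as est_hash_join_cost
  from dual ;
Entering 1123, 50000, 683 and 100000 for the plan above returns 174.891, the same value as the manual calculation.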

Tuesday, 23 January 2018

Hash Join Overflow Costing #2

In my previous post on costing a Hash Join overflow to disk I came up with a formula for the cost of a Hash Join operation that overflows to disk based on tests I had done, but I also mentioned that there might be other factors involved not yet exposed by my testing. My main concern was whether the disk I/O's involved were all of the same type, or a mix of single block and multi-block disk I/O's. I've now done some further testing, and have something to share.

The tests show that all the disk I/O's are costed by the Optimizer as single block disk I/O's. I say this because I changed the cost of multi-block disk I/O's and the cost reported for the Hash Join operation itself did not change. Hence it must be costed as single block disk I/O's by the Optimizer.

The tests I was doing were a join between two tables that used a Hash Join in the execution plan (see previous post for the SQL used in this query). The key points are:
  • The two source tables are accessed by Full Table Scans (TABLE ACCESS FULL) in the execution plan
  • These feed into a HASH JOIN operation
  • Which in turn feeds into a final operation - a SORT AGGREGATE for the SUM's in the SELECT output
  • By adding together the costs of the source data accesses and subtracting from the total cost reported for the Hash Join, we get the net cost of the Hash Join operation itself
Oracle costs disk I/O operations using a set of statistics it has about the computer system it is running on - termed System Statistics and stored in the database itself. These can be a minimal set of just a few statistics or a more complete set of statistics providing a more detailed breakdown of different types of operations on the computer.

In my Oracle 12 instance I only have the "No Workload" default system statistics set in the database, which is the minimal set of statistics. This provides values for the following:
  • IOSEEKTIM - Time in milliseconds for a disk to move the read / write head to the track you want to access - default value of 10
  • IOTFRSPEED - Bytes transferred per millisecond once a transfer starts - default value of 4096
  • MBRC - Multi-block read count in blocks achieved on average - default value of 8
  • CPUSPEEDNW - Speed of CPU in units of millions of cycles per second (not relevant here)
Note that the MBRC here is not the same as the initialization parameter db_file_multiblock_read_count, even though their definitions seem to be the same. This MBRC is not set from the initialization parameter, and remains the same within the system statistics until explicitly changed (one way or another). MBRC is intended to represent the actual size of multi-block reads that were achieved on your computer system, and is set when Workload statistics are collected or when set manually.
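You can see the current values for yourself - the system statistics are stored in SYS.AUX_STATS$:
select pname, pval1
  from sys.aux_stats$
 where sname = 'SYSSTATS_MAIN'
 order by pname ;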

The Optimizer uses these minimal system statistics to derive the values for the elapsed times for single and multi-block reads (SREADTIM and MREADTIM respectively). The formulae it uses are:
  • SREADTIM = IOSEEKTIM + (db_block_size / IOTFRSPEED)
  • MREADTIM = IOSEEKTIM + ((MBRC * db_block_size) / IOTFRSPEED)
With the default values for System Statistics as stated, and a database block size of 8192 (8 KB), this gives values of SREADTIM = 12 ms and MREADTIM = 26 ms.
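The same numbers can be reproduced as simple arithmetic, using the default values above and an 8192 byte block size:
select 10 + (8192 / 4096)     as sreadtim_ms,   -- IOSEEKTIM + (block size / IOTFRSPEED) = 12
       10 + (8 * 8192 / 4096) as mreadtim_ms    -- IOSEEKTIM + ((MBRC * block size) / IOTFRSPEED) = 26
  from dual ;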

What this really means for the Optimizer is that a multi-block read takes 26 / 12 = 2.167 times longer than a single block read i.e. its "cost" is 2.167 times that of a single block read. This is used by the Optimizer when costing multi-block read based operations - it "converts" the multi-block read time into an equivalent number of single block read operations, which all have a cost unit of 1 and are the basis for how the Optimizer reports and compares execution plan costs.

If I were to increase the MBRC from 8 to 16, then the MREADTIM would increase to 42 ms, and a cost factor of 42 / 12 = 3.5 times that of a single block read. The relative net cost though would actually decrease, because each multi-block read would now be reading 16 blocks in one disk I/O rather than 8 blocks i.e. half the number of disk reads are needed to read the same number of blocks in from disk, at a relative cost ratio of 1.615 (3.5 / 2.167). So the decrease in the number of disk reads is greater than the relative increase in cost per disk read.
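A small worked example makes this trade-off concrete. Take reading 800 blocks (an arbitrary figure for illustration) via multi-block reads, costed in single block read equivalents:
select (800 / 8)  * (26 / 12) as cost_at_mbrc_8,    -- 100 multi-block reads at 2.167 each = about 216.7
       (800 / 16) * (42 / 12) as cost_at_mbrc_16    -- 50 multi-block reads at 3.5 each   = 175
  from dual ;
So although each individual read is costed higher, the overall cost of reading the same 800 blocks is lower.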

If the Hash Join operation involved multi-block disk reads then changing MBRC would change the relative cost of those disk reads and we would see a change in the net cost of the Hash Join as reported in the execution plan.

I changed the value of MBRC manually and then shutdown and restarted the database:
exec dbms_stats.set_system_stats ('MBRC', 16)
shutdown immediate
startup
This was to ensure that nothing was still present in memory in the SGA - neither any existing execution plans, or any system statistics used to cost those execution plans.

When I ran the same test queries again, the total costs decreased as expected, because the source data accesses are full table scans which use multi-block reads, and so the total cost reported at the Hash Join step decreased. However, when the costs of the source data accesses were subtracted from the total cost at the Hash Join step, the net cost of the Hash Join operation itself was exactly the same as before for the same query.

Likewise when I increased the MBRC value to 32 there was no change in the net cost of the Hash Join operation when it overflowed to disk.

The conclusion then is that no multi-block disk reads are used within the costing of the Hash Join operation by the Optimizer when the hash table it uses overflows to disk.

Which means that the formula I posted before for an overflowing Hash Join cost is not dependent on multi-block reads at all:-
  • ( ((Build Columns Size + 12) * Build Row Count) + ((Probe Columns Size + 12) * Probe Row Count) ) * 0.0475
Where the "Columns Size" is the sum of the hash table storage for each column i.e. data storage + 2 bytes per column.

Thursday, 11 January 2018

Hash Join Overflow Cost Formula #1


The Hash Join method was introduced in Oracle version 7 (7.3 specifically, I believe), and one of its main goals was to be a join method that lent itself well to being parallelised. However, it is such an efficient join method for larger data volumes, even in serial execution, that it is often picked by the Optimizer over Nested Loops or Sort Merge because of its lower execution cost. This makes the Hash Join probably the most frequently used join method by the Oracle Optimizer, often appearing in execution plans for SQL queries.

A Hash Join works by building a table in memory containing the data from the first data set (termed the Build data set), and then reading the second data set (the Probe data set) to lookup each data row into the in-memory table for any match (join). The in-memory table is structured and accessed by applying a "hash function" to the relevant join data columns. This hashing has various benefits around performance, handling any kind of input data value range, and distributing the input values within the table (minimising bunching values together).

When this hash table fits in memory the additional cost of the Hash Join operation is negligible because it only involves CPU and memory operations and these are multiple orders of magnitude faster than disk accesses i.e. there is often little or no difference to the total reported cost of the Hash Join over the sum of the costs of access of each source data set. However, when the hash table needed is larger than can fit in available memory in the PGA then it must overflow to disk, which in turn significantly increases the cost of the Hash Join operation itself.

A question I have had for a long time is "How does Oracle cost this overflowing Hash Join operation?" Can I replicate this cost formula and understand what the main factors are within this reported cost? Which is the bigger factor - the size of the Build data set or the Probe data set? Knowing such things might offer the possibility of gaining some insight into ways of tuning such large hash joins. At the least I would know more about how the overflowing hash join actually works in practice.

Jonathan Lewis gives a description in his Cost Based Oracle Fundamentals book of how the overflowing Hash Join operation works, with a formula for the major cost components involved. However, I have always found this formula to be more descriptive than quantitative, and to not be easy to use to arrive at a comparable value to what the Optimizer has reported.

I would like a more straightforward quantitative formula that I could use myself to estimate whether a Hash Join will overflow to disk or not, and what its cost will be. After some research I believe I have arrived at such a formula, which I will share here. Note that I'm not saying this is a "perfect" formula, just that this is the conclusion I have arrived at so far as a result of the tests I have done, and it seems to fit the results I have very well. I'll continue to post more details when I refine or revise this in the future.

Why Hash Join Overflows

The limit to the size of a single Hash Table or other "work area" in the PGA is determined by a hidden, internal initialization parameter of "_smm_min_size". If the size of the Hash Table needed would be larger than this, then the Optimizer assumes that it will overflow to disk and costs it accordingly. My notes say that the value of "_smm_min_size" is the larger of 128 KB or 0.1% of PGA_AGGREGATE_TARGET. I cannot find exactly where I got this information from, but my memory is telling me that it was from Randolf Geist, possibly in a response to a question on one of the Oracle forums.

The main reason Oracle limits the potential size of a Hash Table is to ensure that a high number of other queries can run concurrently and not be starved of memory within the PGA. The Optimizer is assuming a worst case scenario to make sure other sessions do not suffer when a very large query is executed by one session. However, it is possible that when executed the hash table will not overflow to disk i.e. if at the moment that query is executed there is enough free memory in the PGA, then Oracle will let that session have a larger "work area" than the value of "_smm_min_size". So even though the Optimizer has costed the Hash Join operation as an overflow to disk and costed it accordingly, it does not mean that it will always overflow to disk when executed.

How Overflow Hash Join Works

Jonathan Lewis gives a description in his book of how the Hash Join works when it overflows to disk. I won't repeat the details here as they are not particularly relevant at the end of the day. But a crude summary would be that:
  • The first data set is read once and broken into "chunks" that are written out to disk (temporary storage), where each chunk is a contiguous sub-set of the overall hash table
  • One or more of these chunks are then kept in memory ready for the pass over the second data set
  • The second data set is read in and hashed in the normal way:-
    • If it hashes to an in-memory chunk then it is matched as normal
    • Otherwise it is written out to disk (temporary storage) split out into chunks on the same basis as the first data set
  • The remaining chunks of the first data set are then read into memory, and the corresponding chunks of the second data set read back and matched as normal. This is repeated until all of both data sets have been processed.
Essentially this is saying that there will be an additional pass over both sets of data - after the first read of each data set (already costed within the execution plan), there is an additional write out to disk of the majority of each data set, followed by a read back of each data set. Also extra reads and writes may be needed in the first pass of each data set, to keep the data in the "chunks" properly grouped together on disk.

It is not clear whether these write and read operations will be single block or multi-block disk read operations, or how many of them there will be. Potentially multi-block reads could be used when reading back in the pre-hashed chunks from disk. Luckily though this turns out to be irrelevant to the cost formula I have arrived at.

Deriving Hash Join Overflow Cost

Here is how I went about it. I created a set of standard test tables (see later for SQL DDL), each with a mix of NUMBER and VARCHAR2 columns to pad them out a bit, and populated them using a repeatable "connect by" data generator with a different number of rows in each test table. I then ran a query joining 2 of these tables together (see later for SQL), immediately examined the execution plan (from dbms_xplan.display_cursor) and noted the costs of each operation.

Without any indexes on any of these tables the execution plan was always 2 Full Table Scans feeding into a Hash Join operation. When the smaller, build table became large enough the Hash Join operation would overflow to disk, causing its cost to rise significantly, and a "TempSpc" column to appear in the execution plan with a reported value.

By varying only one thing at a time between queries I could see how the Hash Join cost changed when it was overflowing to disk. I was not interested in executions where the Hash Join did not overflow to disk i.e. where the hash table did fit in memory - only in those executions where the Optimizer assumed it would overflow to disk. By examining the change in cost for the Hash Join operation for a corresponding change in only one of the joined tables, I could deduce a multiplying factor being used within the underlying Hash Join cost calculation.

My Oracle version is 12.1 on Oracle Linux, so my results are only guaranteed to be accurate for that version. I would assume the results should be the same for 11g, as I don't think anything significant has changed in how the Hash Join operation is costed, but that would need to be verified.
Oracle Database 12c Enterprise Edition Release - 64bit Production
PL/SQL Release - Production
CORE Production
TNS for Linux: Version - Production
NLSRTL Version - Production

Hash Join Overflow Formula and its Accuracy

I started by comparing the Hash Join cost when the number of columns in a data set changed i.e. the size of the data set increased for the same number of rows. Jonathan Lewis states that the memory needed per column is the storage size of the column itself plus 2 additional bytes. I observed that the Hash Join cost changed by a factor of 0.0475 per byte per 1000 rows. And that this multiplying factor was the same under different row counts in either table in the query.

This only involves the columns needed by the query itself, which the Optimizer must extract and process, and not all of the columns in the table. In this case it is the columns referenced in the "SELECT" list and those referenced in the "WHERE" clause. And the "storage size" is the number of bytes that Oracle uses to store that data on disk, which is not necessarily 1 byte per value or 1 byte per character or digit. For instance, a NUMBER is stored as 2 digits per byte.

My other observation was that when only the row count in one table changed the Hash Join cost changed by a factor of 0.5675 per 1000 rows. As this was a constant per row I wondered if this was something to do with some extra data per row causing extra disk I/Os. And 0.5675 divided by 0.0475 gives 11.9474 which is almost 12, implying a 12 byte overhead per data row within the Hash Table.

Based on this, I arrived at the following formula for an overflowing Hash Join cost:
  • ( ((Build Columns Size + 12) * Build Row Count) + ((Probe Columns Size + 12) * Probe Row Count) ) * 0.0475
Where the "Columns Size" is the sum of the hash table storage for each column i.e. data storage + 2 bytes per column.

I then checked the calculated costs from this formula against the series of test queries I had been using, and the resultant cost for the overflowing Hash Join came out almost the same in all cases. The percentage difference was under 1% in almost all cases, which I take to be a high degree of accuracy. The only anomalies are for when the "build data set" is only just bigger than can fit in the PGA, but even then it is only a 3% difference. As the size of the build data set increased so the percentage difference decreased.

On the one hand it might not be surprising to some people that my derived formula produces the same results using the same inputs that were used to create the formula in the first place. However, given that the queries I tested varied the number of columns being selected, the number of columns being joined on, and the number of rows in each table, these would appear to cover the only variables relevant to this formula. And in each of these cases the change in the reported Hash Join cost from Oracle was always a multiple of this fixed constant of 0.0475.

Other Factors

While I do believe that this formula is true and valid for the system I was testing on, it may not be true for all other systems. It is likely that the multiplying factor of 0.0475 will be different on other systems. Given that this additional cost for the overflowing Hash Join is due to the additional disk I/Os involved, then it would seem likely that changes to the system statistics inside Oracle for disk read times would result in a change in the value of this multiplying factor. I will investigate this in my next series of tests.

There may or may not be some small "constant cost value" involved as well within the formula, for some kind of constant overhead within the overflowing Hash Join operation. This "constant cost value" would become negligible at higher data volumes compared to the costs for the build and probe data sets, but it might explain the slightly larger difference in calculated cost at the smallest overflowing data set size.

There is also the concept of "one pass" and "multi-pass" hash joins within Oracle, as well as "optimal" hash joins. I don't understand the real difference between these, other than "optimal" is when it fits in memory and the other two are when it overflows to disk. It is possible that what I've seen has been the cost for "one pass" overflowing hash joins, and for even larger data sets a "multi-pass" hash join would be used that would involve a different cost formula.

The SQL for the table and query

Here is the SQL to create one example table - they are all the same but for name and row counts - and the query used.

Create Table - run from SQL*Plus with 2 command line arguments of the row count and a table name suffix e.g. "@crhjtab 1000000 100k".
create table hj&2
tablespace testdata
as
select r pid
     , 1 one
     , 2 two
     , 3 three
     , 4 four
     , 5 five
     , 10 ten
     , trunc (r / 10) per10
     , trunc (r / 100) per100
     , mod (r, 10) mod10
     , mod (r, 100) mod100
     , mod (r, 1000) mod1000
     , mod (r, 10000) mod10000
     , mod (r, 100000) mod100000
     , 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' filler2
     , 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' filler3
  from (select rownum r
          from (select rownum r from dual connect by level <= 1000) a,
               (select rownum r from dual connect by level <= 1000) b,
               (select rownum r from dual connect by level <= 1000) c
         where rownum <= &1) ;
exec dbms_stats.gather_table_stats (user, upper ('hj&2') )
Query - run from SQL*Plus with 2 command line arguments of table name suffixes e.g. "@hj0 100k 200k"
select /* HashTest0 */ sum (hj1.one) sum_b1
  from hj&1 hj1, hj&2 hj2
 where hj1.per10 = hj2.per10 
   and hj1.mod10 = hj2.mod100 ;
select * from table (dbms_xplan.display_cursor) ;
I've only shown one query here, as the others I used are almost the same but for the columns in the "SELECT" list. The variations of this query had different numbers of columns in the "SELECT" list, to increase the number of columns from the build and/or the probe tables.

Wednesday, 3 January 2018

Reading AWR Reports #2 - Report Overview

In the first post on Reading AWR Reports I made the point that you should first be clear on the details of the "performance problem" you are investigating. If there is no specific problem reported then there is no point looking for something that is not there in an AWR report. I also stated that an AWR Report is one amongst several tools available to you that you can use to investigate Oracle database performance problems, and you should make sure that a database wide AWR Report is the most suitable tool to be using for the specific performance problem you are currently tasked with investigating.

Assuming that has all been addressed, then the first thing I do with an AWR Report is a few high level checks - a kind of sanity check to get a feel for whether there might be some kind of performance issue there or not.

A performance problem is normally when a task takes too long to complete, and that is affected by the various resources it uses while doing its work. The key measurements are therefore both sides of this - the input work requests, and the system resource usage while doing that work. I basically want to check:
  • The resource usage as a percentage of system capacity i.e. utilisation
  • The amount of work requested / done (assuming they are the same), which is really SQL statements executed for a database
  • Amount of "wait time" within "SQL execution time" i.e. wait as percentage of work time
  • Top wait events to see how these correlate with the previous observations
That's it. Nothing more detailed for the first pass. If the input workload is high enough and the resource utilisation high enough and something looks wrong then I follow up with a second pass of the AWR Report diving down into more details based on what the first pass showed up.

Let's use the following AWR Report to show what I mean:

DB Name         DB Id    Instance     Inst Num Startup Time    Release     RAC
------------ ----------- ------------ -------- --------------- ----------- ---
O12DB         3429470280 o12db               1 05-Dec-17 06:59  NO

Host Name        Platform                         CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
xxxxxxxxxx.local Linux x86 64-bit                    2     2       1       7.80

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:        79 05-Dec-17 09:53:43       142       3.0
  End Snap:        80 05-Dec-17 10:03:44       142       4.1
   Elapsed:               10.03 (mins)
   DB Time:                0.39 (mins)

Top 10 Foreground Events by Total Wait Time
                                           Total Wait       Wait   % DB Wait
Event                                Waits Time (sec)    Avg(ms)   time Class
------------------------------ ----------- ---------- ---------- ------ --------
DB CPU                                           12.4              53.8
log file sync                        2,727        9.3       3.42   40.4 Commit
db file sequential read                177        1.2       6.82    5.2 User I/O
control file sequential read           827         .1       0.12     .4 System I
The snapshot duration was 10 minutes, and the system had 2 CPU Cores, so there were 20 minutes of CPU capacity available. The database processing time is reported as "DB Time" and is 0.39 minutes, which is about 1.95% resource utilisation. From this I can see that the database was doing very little work at all, so there is not a database wide performance problem worth investigating. Even though the Top Wait Events seem to show that 40% of the time was spent waiting on the "log file sync" event, the actual amount of Time is very trivial - 9.3 seconds of waiting out of a 600 second period (10 minutes). Such low workloads and resource utilisation can lead to various measurement anomalies, so it is not worth trying to drill down further into the wait event details. Any actual problem is probably specific to a single session, and should be investigated using session specific tools.

Another AWR Report:

DB Name         DB Id    Instance     Inst Num Startup Time    Release     RAC
------------ ----------- ------------ -------- --------------- ----------- ---
O12DB         3429470280 o12db               1 05-Dec-17 06:59  NO

Host Name        Platform                         CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
xxxxxxxxx.local Linux x86 64-bit                    2     2       1       7.80

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:        71 05-Dec-17 07:55:35        39       1.8
  End Snap:        72 05-Dec-17 08:05:39        40       2.0
   Elapsed:               10.06 (mins)
   DB Time:                9.35 (mins)

Load Profile                    Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------
      Redo size (bytes):       3,470,831.7           5,403.2
  Logical read (blocks):          61,507.4              95.8
          Block changes:          20,563.9              32.0
             User calls:             495.9               0.8
           Parses (SQL):             302.6               0.5
         Executes (SQL):          13,287.8              20.7
           Transactions:             642.4

Top 10 Foreground Events by Total Wait Time
                                           Total Wait       Wait   % DB Wait
Event                                Waits Time (sec)    Avg(ms)   time Class
------------------------------ ----------- ---------- ---------- ------ --------
DB CPU                                          372.9              66.5
log file sync                      272,633      196.8       0.72   35.1 Commit
db file sequential read                472        2.2       4.75     .4 User I/O
Again, same database instance and a 10 minute period of time, but a different workload. Now we see that the resource utilisation is up at 46.75%, being 9.35 minutes of DB Time out of 20 minutes of potential CPU capacity. From the Load Profile we can see that the database was executing over 13,000 SQL statements per second. From the wait events we see that "DB CPU" is 66.5% of the "DB Time" i.e. doing real work. So waiting as a percentage of SQL execution time would be about 33.5%. And we can see that almost all of this wait time is due to a single wait event - "log file sync".

Again, this does not directly tell us whether there is a performance problem or not. In all databases doing work there will be some wait events occurring, and one of these will always be top when ordered by wait time. That does not mean that it is necessarily a problem, or that it can be "fixed" in any way. In this case the workload is doing a lot of inserts and updates in small transactions, so there is a high frequency of commits which all involve writes to the redo logs. And we would expect the "log file sync" wait event to be there in the set of top 10 wait events in this scenario i.e. it is not a "problem" but a "result" of the workload running on the database.

Tuesday, 12 December 2017

Reading AWR Reports #1 - Start With The Problem

I hope that this will be the start of a short series of posts on how to read an AWR Report. I doubt that it will contain anything revolutionary, but it is clear that some people do not know where to start when presented with an AWR Report, so I aim to cover some essentials about this. This first post will cover what to consider before attempting to read and understand an AWR report, because ultimately AWR is just another tool and not the answer itself. I will get into the first things I look at in an AWR report in the next post.

As I've already indicated, AWR (Automatic Workload Repository) is really another tool in your toolkit for examining what happened on an Oracle database. And it is an extra cost option for the Oracle database Enterprise Edition, requiring that you purchase the Diagnostics Pack licence. Before using AWR you need to make sure that AWR is the right tool to use for the problem you have, rather than some other tool available to you. Which means that you should always start with the problem you are trying to fix, and not dive straight into using some tool or other and hope that it shows you the fault in the first screen's worth of output. You must be clear on what the problem is that you are trying to fix, the impact of that problem on users or the application or something else, and how you would measure the improvement to show that the problem has been resolved. If you don't know that information about the "problem", then you can never be sure that you have actually fixed the real cause of it - it may simply have disappeared for other reasons.

I'd strongly recommend following a top down approach to a performance problem investigation - start with the real world problem that users are experiencing, and work down and inwards from that towards the database. Do not start at the bottom directly on the database, and hope to find the problem quickly there. This is a "finding a needle in a haystack" approach. You might get lucky and find something related to the problem straight away, or you might not and end up wasting your time chasing dead ends. The "performance problem" may actually be somewhere else outside of the database, such as the client application itself, the network, or any application servers. Check and eliminate those first. Only after checking those should you move onto the database itself.

An AWR Report tells you what happened on your database between the two snapshots used. It does not tell you whether any problems did or did not occur, or what those problems might be. And it does not tell you whether your database was well behaved or not. It just provides a complete overview of everything that happened on that database between those two snapshots, and it is up to you to decide whether this meets expectations or not.

AWR reports are database wide i.e. across all the activity that happened on that database between the two snapshots. This includes activity from all sessions on all schemas in the database. So it is only really of use if the problem you are investigating is a system wide one, affecting most of the sessions and users of the database. And even then it is still not guaranteed to show you what the cause of your problem is, as it depends on what your problem is.

Furthermore, an AWR Report only tells you the grand totals of what happened over that period of time, and the averages of those (some total value divided by the elapsed time or some other measurement). It does not tell you the peak values, as it only knows the absolute change in measurements between the two snapshots being reported on. And average values over a period of time are always lower than the peak values during the most active period, because that is what an "average" means. How much lower the average values are than the peak values depends on a number of factors, such as the different time periods involved and the relative difference in activity between the peak period and the other periods.

A pitfall to avoid is just looking at the "top 10" type lists of SQL statements executed and wait events. In every database there is always a slowest SQL statement or top wait event, regardless of how active the database is. Just because that was the "slowest SQL statement" does not mean that it is a bad statement, or that it is related in any way to the problem you are investigating. You need to prove a link between the two, and not assume that they are related in any way. Trying to "improve" a "slow SQL statement" that is unrelated to the real problem users are experiencing, is just a waste of time. Check that any potential cause is really related to your problem, and not independent of it.

Rather than wade through a series of full single AWR Reports for each pair of snapshots, you can instead query the AWR data directly (again, providing you have paid for the Diagnostics Pack licence), and pull out just the key performance measurements you are interested in across a number of snapshots all at once. These might include Average Active Sessions, the number of SQL statements executed, and the percentage of time spent waiting. This can help performance investigations progress faster, as you can more quickly identify the peak activity period across all the AWR snapshots available, and then drill down into only the most active period of time.
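As an illustration of the kind of query I mean, this sketch pulls the change in DB Time for each AWR snapshot from the workload repository views (this also needs the Diagnostics Pack licence, and the underlying values are cumulative since instance startup, so differences spanning an instance restart should be ignored):
select s.snap_id,
       s.end_interval_time,
       round ( (t.value - lag (t.value) over (order by s.snap_id))
               / 1000000 / 60, 2 ) as db_time_mins
  from dba_hist_snapshot       s,
       dba_hist_sys_time_model t
 where t.snap_id         = s.snap_id
   and t.dbid            = s.dbid
   and t.instance_number = s.instance_number
   and t.stat_name       = 'DB time'
 order by s.snap_id ;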

Remember that AWR provides database wide information over relatively long periods of time (between AWR snapshots), and that there are other tools out there for doing different types of performance analysis of Oracle databases:
  • Real time query and analysis of Oracle Dynamic Performance Views (V$ views), such as V$SESSION
  • Active Session History (ASH), which also requires the Diagnostics Pack license
  • SQL Trace to record all SQL statements executed by a session, and their elapsed times
Each of these has its advantages and disadvantages, and if you want to be an Oracle expert then you should be aware of these and their tradeoffs. You need to make sure that you are using the right tool for the kind of performance problem you are investigating.

In conclusion then, I'd say that my two main points in this post are:
  • Make sure you are clear on what the performance problem is that you are investigating
  • Check that AWR Reports are the right tool to be using to investigate this performance problem, rather than something else
Finally, a piece of blatant self advertising. My company, Bottleneck Data Solutions, does Oracle Performance Tuning amongst the Oracle Database Services it offers. We are currently offering a free AWR Report Review, where we will do a high level review of an AWR Report you send us and send back to you a short report of our findings. That's it - nothing complex or detailed, just a summary of what we think are the highlights of the AWR Report you sent us. If you want to take advantage of this, then follow the link to the web site, and use the "Contact" form to send us your AWR Report.

Wednesday, 22 November 2017

Chatty Applications and Simple SQL

One type of "poor performance" scenario I have come across a few times is due to what I call "chatty applications". These are applications that execute a disproportionally high number of what look like very simple SQL queries for every business transaction they do. And often this is a deliberate design choice by the application architects and developers, claiming that simpler SQL statements on single tables using indexed columns always leads to efficient execution plans, so it must be better than anything else. Unfortunately it is the high volume of execution of such "simple SQL queries" that is the cause of the poor performance these applications often experience. Both in terms of poor response times for individual users, and poor total throughput across all the users. Why then are such "chatty applications" the cause of poor performance and not the solution?

First, there are network round trips needed for each SQL statement executed, and each of these typically takes on the order of milliseconds of elapsed time. The SQL statement text is sent from the client to the database server, and then the resultant data is sent back from the server to the client. And when there are hundreds of SQL statements being executed per business transaction, these all add up. Executing a "business transaction" in such a "chatty application" can end up spending more time transferring data back and forth over the network than actually doing work with that data on either the client or the database server.

Second, the high SQL execution volume results in high library cache usage and contention for access to it inside Oracle. For each SQL statement executed, Oracle will check whether that SQL is already in the library cache (in the shared pool in the SGA) and, if so, will go ahead and use the associated execution plan. Such library cache look-ups involve various temporary internal locks on the library cache, to make sure it doesn't change while your session is examining it. So more SQL statements being executed leads to more library cache lookups and more contention for the controlling locks on it.

Third, the volume of data transferred to the client and then processed on it can be much higher than if a single, more complex SQL statement was used instead. This is also about network delays, but this time it is due to the amount of intermediate result data being transferred to the client, as it works its way through data from different tables as part of its "business transaction".

Ultimately this is all a variation on the "row by row" processing mindset, where all the raw data is pulled back into the client, which then does further processing on it, such as joining together data from multiple tables. A "row set" based approach can be more efficient - executing fewer but more complex SQL statements on the database server, and sending only the final resultant data back to the client.
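As a trivial, made-up illustration of the difference (the table and column names here are hypothetical):
-- Chatty, row by row pattern: several round trips per business transaction
select order_status from orders    where order_id    = :id ;
select customer_id  from orders    where order_id    = :id ;
select cust_name    from customers where customer_id = :cid ;

-- Row set alternative: one round trip, with the join done on the database server
select o.order_status, c.cust_name
  from orders o, customers c
 where o.order_id    = :id
   and c.customer_id = o.customer_id ;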

I have seen this kind of scenario in action at a number of customers, where it was their application design that was slowing down and limiting the performance of their application. In some cases we were able to identify the highest use cases in the application and successfully modify it to use fewer, more complex SQL queries instead, so that more work is done per execution by far fewer SQL statements. In other cases it was not as clear due to the application developers using a "glue layer" of third party software to handle the application to database mappings and constructing the SQL queries to be executed in each case. Fixing this requires reconfiguration of the "glue layer" to issue more complex SQL queries involving multi-table joins. However, in at least one case the customer did not at that time have enough expertise in how to use this "glue layer" to achieve that. They had just used it in an "out of the box" type fashion, and were now suffering as a consequence of that.

Probably the worst case I saw was an application using another "glue layer" that only ever issued SQL queries for a single column of data at a time, of the form "select one-column from table-name where pkid_col = value", and consequently did multiple such SQL queries to get each individual column it needed. Obviously this led to horrible performance and scalability. Luckily the application had been written to use a well defined internal API for data access, which was in turn wrapped around the "glue layer". We were able to insert an extra layer between the application's API and the "glue software", intercepting the application API data access calls before the "glue software" was called. We implemented a caching layer here, based on the primary key of a record from each table, which was always used on each query. For a new record query, with a different primary key value, the API now did a "select *" into a local record buffer of the whole data record, and then returned the one individual column requested. If the primary key requested matched that already in the current record buffer, then the code just returned the individual column value directly from that buffer, avoiding any further SQL execution and any call into the "glue layer".

As you might expect performance increased significantly as a result of this change - replacing the multiple single column SQL queries with just one whole record SQL query followed by in-memory, local accesses, avoiding all those network round trips. Also the net load on the Oracle database server decreased significantly, with less contention on the library cache, and the CPU and disk utilisation decreased significantly as well. Consequently the performance of all the other SQL statements being executed on that database server improved as well due to those resources being freed up. A definite "win-win" all around, by accessing the data in the database in a much more efficient way.