Welcome to db_io_metrics, a new utility to display database physical IO metrics in real time.

The db_io_metrics.pl utility displays database physical IO metrics in real time. It basically takes a snapshot each second (the default interval) of the gv$filestat and gv$tempstat cumulative views and computes the delta with the previous snapshot. The utility is RAC and multitenant aware.
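
For the curious, the kind of cumulative counters it samples can be seen with a query like this one (a simplified sketch of the gv$filestat side; the script itself also maps files to tablespaces and containers and does the delta math in Perl, and gv$tempstat has the same shape for tempfiles):

sqlplus -s / as sysdba <<'EOF'
select inst_id, file#, phyrds, phyblkrd, readtim, phywrts, phyblkwrt, writetim
from   gv$filestat;
EOF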

This utility:

  • provides useful metrics.
  • is RAC aware.
  • detects if it is connected to a multitenant database and can then display the container metrics.
  • is fully customizable: you can aggregate the results depending on your needs.
  • does not install anything into the database.

It displays the following metrics:

  • Reads/s: number of reads per second.
  • KbyRead/s: Kbytes read per second.
  • Avg ms/Read: average milliseconds per read.
  • AvgBy/Read: average bytes per read.
  • The same metrics are provided for write operations.

At the following levels:

  • Database Instance.
  • Database container.
  • Filesystem or ASM Diskgroup.
  • Tablespace.
  • Datafile.

Let’s see the help:

./db_io_metrics.pl -help

Usage: ./db_io_metrics.pl [-interval] [-count] [-inst] [-cont] [-fs_dg] [-tbs] [-file] [-fs_delimiter] [-show] [-display] [-sort_field] [-help]

 Default Interval : 1 second.
 Default Count    : Unlimited

  Parameter         Comment                                                           Default
  ---------         -------                                                           -------
  -INST=            ALL - Show all Instance(s)                                        ALL
                    CURRENT - Show Current Instance
  -CONT=            Container to collect (wildcard allowed)                           ALL
  -FS_DG=           Filesystem or Diskgroup to collect (wildcard allowed)             ALL
  -TBS=             Tablespace to collect (wildcard allowed)                          ALL
  -FILE=            Datafile to collect (wildcard allowed)                            ALL
  -FS_DELIMITER=    Folder which follows the FS mount point                           ORADATA
  -SHOW=            What to show: inst,cont,fs_dg,tbs,file (comma separated list)     INST
  -DISPLAY=         What to display: snap,avg (comma separated list)                  SNAP
  -SORT_FIELD=      reads|writes|iops                                                 NONE

Example: ./db_io_metrics.pl
Example: ./db_io_metrics.pl  -inst=BDT_1
Example: ./db_io_metrics.pl  -show=inst,tbs
Example: ./db_io_metrics.pl  -show=inst,tbs -tbs=%UNDO%
Example: ./db_io_metrics.pl  -show=fs_dg
Example: ./db_io_metrics.pl  -show=inst,fs_dg -display=avg
Example: ./db_io_metrics.pl  -show=inst,fs_dg -sort_field=reads
Example: ./db_io_metrics.pl  -show=inst,tbs,cont
Example: ./db_io_metrics.pl  -show=inst,tbs,cont -cont=%P_%
Example: ./db_io_metrics.pl  -show=inst,cont -sort_field=iops

The main options/features are:

  1. You can choose the number of snapshots to display and the time to wait between snapshots (see the example after this list).
  2. You can choose on which database instance to collect the metrics thanks to the -INST= parameter.
  3. You can choose on which database container to collect the metrics thanks to the -CONT= parameter (wildcard allowed).
  4. You can choose on which diskgroup or filesystem to collect the metrics thanks to the -FS_DG= parameter (wildcard allowed).
  5. You can choose on which tablespace to collect the metrics thanks to the -TBS= parameter (wildcard allowed).
  6. You can choose on which datafile to collect the metrics thanks to the -FILE= parameter (wildcard allowed).
  7. You can choose which folder is your filesystem delimiter thanks to the -FS_DELIMITER= parameter.
  8. You can aggregate the results at the database instance, container, diskgroup or filesystem, and tablespace levels thanks to the -SHOW= parameter.
  9. You can display the metrics per snapshot, the average values since the collection began (that is to say since the script was launched), or both thanks to the -DISPLAY= parameter.
  10. You can sort based on the number of reads, number of writes or number of IOPS (reads+writes) thanks to the -SORT_FIELD= parameter.
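
For example, to take 10 snapshots 5 seconds apart (assuming the -interval and -count parameters listed in the usage follow the same name=value syntax as the other parameters):

./db_io_metrics.pl -interval=5 -count=10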

Let’s see some use cases:

Report the physical IO metrics for the database instances:

./db_io_metrics.pl -show=inst
............................
Collecting 1 sec....
............................

......... SNAP TAKEN AT ...................

12:55:31                                                                                Kby      Avg       AvgBy/               Kby       Avg        AvgBy/
12:55:31   INST         CONT         FS_DG      TBS            FILE           Reads/s   Read/s   ms/Read   Read      Writes/s   Write/s   ms/Write   Write
12:55:31   ----------   ----------   --------   ------------   ------------   -------   ------   -------   ------    --------   -------   --------   ------
12:55:31   CBDT1                                                              376.0     3008     5.8       8192      0.0        0         0.0        0
12:55:31   CBDT2                                                              346.0     2768     15.6      8192      0.0        0         0.0        0

Report the physical IO metrics per database instance and per container, sorted by iops:

./db_io_metrics.pl -show=inst,cont -sort_field=iops
............................
Collecting 1 sec....
............................

......... SNAP TAKEN AT ...................

12:57:59                                                                                Kby      Avg       AvgBy/               Kby       Avg        AvgBy/
12:57:59   INST         CONT         FS_DG      TBS            FILE           Reads/s   Read/s   ms/Read   Read      Writes/s   Write/s   ms/Write   Write
12:57:59   ----------   ----------   --------   ------------   ------------   -------   ------   -------   ------    --------   -------   --------   ------
12:57:59   CBDT2                                                              293.0     2344     18.8      8192      0.0        0         0.0        0
12:57:59   CBDT2        CDB$ROOT                                              150.0     1200     18.4      8192      0.0        0         0.0        0
12:57:59   CBDT2        P_1                                                   143.0     1144     19.2      8192      0.0        0         0.0        0
12:57:59   CBDT2        PDB$SEED                                              0.0       0        0.0       0         0.0        0         0.0        0
12:57:59   CBDT2        P_2                                                   0.0       0        0.0       0         0.0        0         0.0        0
12:57:59   CBDT2        P_3                                                   0.0       0        0.0       0         0.0        0         0.0        0
12:57:59   CBDT1                                                              274.0     2192     9.1       8192      0.0        0         0.0        0
12:57:59   CBDT1        CDB$ROOT                                              274.0     2192     9.1       8192      0.0        0         0.0        0
12:57:59   CBDT1        PDB$SEED                                              0.0       0        0.0       0         0.0        0         0.0        0
12:57:59   CBDT1        P_1                                                   0.0       0        0.0       0         0.0        0         0.0        0
12:57:59   CBDT1        P_2                                                   0.0       0        0.0       0         0.0        0         0.0        0
12:57:59   CBDT1        P_3                                                   0.0       0        0.0       0         0.0        0         0.0        0

Report the physical IO metrics per database instance and per filesystem or ASM diskgroup:

./db_io_metrics.pl -show=inst,fs_dg
............................
Collecting 1 sec....
............................

......... SNAP TAKEN AT ...................

13:00:39                                                                                Kby      Avg       AvgBy/               Kby       Avg        AvgBy/
13:00:39   INST         CONT         FS_DG      TBS            FILE           Reads/s   Read/s   ms/Read   Read      Writes/s   Write/s   ms/Write   Write
13:00:39   ----------   ----------   --------   ------------   ------------   -------   ------   -------   ------    --------   -------   --------   ------
13:00:39   CBDT1                                                              349.0     2792     8.3       8192      0.0        0         0.0        0
13:00:39   CBDT1                     DATA                                     349.0     2792     8.3       8192      0.0        0         0.0        0
13:00:39   CBDT2                                                              272.0     2176     25.4      8192      0.0        0         0.0        0
13:00:39   CBDT2                     DATA                                     272.0     2176     25.4      8192      0.0        0         0.0        0

Report the physical IO metrics per… I'll let you finish the sentence, as the utility is customizable enough 😉

Remarks:

  • The metrics are “only” linked to the datafiles and tempfiles (see the Oracle documentation for more details).
  • In case your database is on filesystems, you have to change the “FS_DELIMITER” argument to aggregate the metrics at the filesystem level. For example, if the filesystems are:
Filesystem             size   used  avail capacity  Mounted on
devora11-data/u500     960G   927G    33G    97%    /ec/dev/server/oracle/devora11/u500
devora11-data/u501     960G   767G   193G    80%    /ec/dev/server/oracle/devora11/u501
devora11-data/u502     500G   445G    55G    89%    /ec/dev/server/oracle/devora11/u502

And the datafiles are located “behind” the oradata folder:

/ec/dev/server/oracle/devora11/u500/oradata/BDT
/ec/dev/server/oracle/devora11/u501/oradata/BDT
/ec/dev/server/oracle/devora11/u502/oradata/BDT

Then you can launch the utility that way:

./db_io_metrics.pl -show=inst,fs_dg -fs_delimiter=oradata
............................
Collecting 1 sec....
............................

......... SNAP TAKEN AT ...................

11:00:35                                                                                               Kby      Avg       AvgBy/               Kby       Avg        AvgBy/
11:00:35   INST         FS_DG                                  TBS            FILE           Reads/s   Read/s   ms/Read   Read      Writes/s   Write/s   ms/Write   Write
11:00:35   ----------   ------------------------------------   ------------   ------------   -------   ------   -------   ------    --------   -------   --------   ------
11:00:35   BDT                                                                                129.0     1032     8.3       8192      0.0        0         0.0        0
11:00:35   BDT          /ec/dev/server/oracle/devora11/u500/                                  67.0      536      10.0      8192      0.0        0         0.0        0
11:00:35   BDT          /ec/dev/server/oracle/devora11/u501/                                  22.0      176      5.5       8192      0.0        0         0.0        0
11:00:35   BDT          /ec/dev/server/oracle/devora11/u502/                                  40.0      320      7.0       8192      0.0        0         0.0        0

 

  • If you don’t want to see the FS (do not use -show=fs_dg), then there is no need to specify the -fs_delimiter argument.
  • Reading Luca Canali’s good article A Closer Look at CALIBRATE_IO gave me the idea to create this utility.
  • If you are interested in those real-time metrics at the ASM level, you can have a look at the asm_metrics utility.

You can download it from this repository or copy the source code from this page.


Reduce resource consumption and clone your Oracle virtual environment on your laptop in seconds using Linux containers and btrfs

Last week I wanted to create a new Oracle virtual machine on my laptop and discovered that the disk space on my SSD was more or less exhausted. So I looked for a solution to minimize the disk usage of my Oracle virtual machines.

I found a way to achieve this based on these technologies:

  • Linux Containers: a lightweight virtualization solution for Linux.
  • btrfs file system: a file system that allows you to create snapshots almost instantly, consuming virtually no additional disk space, since a snapshot and the original it was taken from initially share all of the same data blocks.

In this post I will show how I created an Oracle environment on my laptop with these technologies, and how we can clone a container, an ORACLE_HOME and a database in a few seconds with initially no additional disk space.

Note: The goal is to create a “test” environment on your laptop. I would not suggest following this installation process on a “real” system 😉

PREPARATION PHASE

Step 1: let’s create an OEL 6.5 virtual machine (named lxc) using VirtualBox. This virtual machine will host our Linux containers, Oracle software and databases.

Step 2: Install lxc and btrfs in the virtual machine created in step 1.

[root@lxc ~]# yum install btrfs-progs
[root@lxc ~]# yum install lxc
[root@lxc ~]# service cgconfig start
[root@lxc ~]# chkconfig cgconfig on
[root@lxc ~]# service libvirtd start
[root@lxc ~]# chkconfig libvirtd on

Step 3: Add a btrfs file system to the virtual machine (this file system will receive the Oracle software and databases). To do so, add a disk to the VirtualBox machine created in step 1, start the machine and launch the fs creation:

[root@lxc ~]# mkfs.btrfs /dev/sdb
[root@lxc ~]# mkdir /btrfs
[root@lxc ~]# mount /dev/sdb /btrfs
[root@lxc ~]# chown oracle:dba /btrfs
[root@lxc ~]# blkid /dev/sdb
/dev/sdb: UUID="3f6f7b51-7662-4d81-9a29-195e167e54ff" UUID_SUB="1d79e0d0-933d-4c65-9939-9614375da5e1" TYPE="btrfs"
Retrieve the UUID and put it into the fstab
[root@lxc ~]# cat >> /etc/fstab << EOF
UUID=3f6f7b51-7662-4d81-9a29-195e167e54ff /btrfs btrfs    defaults   0 0
EOF
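
By the way, if you prefer to add those disks from the host command line rather than through the VirtualBox GUI, something along these lines should do it (a sketch: the controller name "SATA" is an assumption, check yours with VBoxManage showvminfo lxc; 51200 MB matches the 50G shown by df later):

VBoxManage createhd --filename btrfs_oracle.vdi --size 51200
VBoxManage storageattach lxc --storagectl "SATA" --port 1 --device 0 --type hdd --medium btrfs_oracle.vdi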

Step 4: Add a btrfs file system to the virtual machine (this file system will receive the Linux containers). To do so, add another disk to the VirtualBox machine created in step 1, start the machine and launch the fs creation:

[root@lxc ~]# mkfs.btrfs /dev/sdc
[root@lxc ~]# mkdir /container
[root@lxc ~]# mount /dev/sdc /container
[root@lxc ~]# blkid /dev/sdc
/dev/sdc: UUID="8a565bfd-2deb-4d02-bd91-a81c4cc9eb54" UUID_SUB="44cb0a14-afc5-48eb-bc60-4c24b9b02ab1" TYPE="btrfs"
Retrieve the UUID and put it into the fstab
[root@lxc ~]# cat  >> /etc/fstab << EOF
UUID=8a565bfd-2deb-4d02-bd91-a81c4cc9eb54 /container btrfs    defaults   0 0
EOF

Step 5: Create btrfs subvolumes for the database software and the databases.

[root@lxc ~]# btrfs subvolume create /btrfs/u01
Create subvolume '/btrfs/u01'
[root@lxc ~]# btrfs subvolume create /btrfs/databases
Create subvolume '/btrfs/databases'
[root@lxc ~]# chown oracle:dba /btrfs/u01
[root@lxc ~]# chown oracle:dba /btrfs/databases

Step 6: add the hostname to /etc/hosts

[root@lxc btrfs]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 lxc
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

and install the 12cR1 database software with:

Oracle Base: /btrfs/u01
Software location: /btrfs/u01/product/12.1.0/dbhome_1
Inventory directory: /btrfs/u01/oraInventory
oraInventory Group Name: dba

Step 7: Create a simple database with the datafiles, redo logs and controlfile located in the /btrfs/databases folder.
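
One way to do it is dbca in silent mode, for example (a sketch: the SID BDTDB matches the one used later in this post, the password and memory settings are placeholders to adapt):

[oracle@lxc ~]$ /btrfs/u01/product/12.1.0/dbhome_1/bin/dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbname BDTDB -sid BDTDB \
  -sysPassword change_me -systemPassword change_me \
  -datafileDestination /btrfs/databases \
  -storageType FS \
  -memoryPercentage 30 \
  -emConfiguration NONE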

Step 8: Create a Linux container (using the oracle template) that will be the source of all our new containers.

lxc-create --name cont_source -B btrfs --template oracle -- --url http://public-yum.oracle.com -R 6.latest -r "perl sudo oracle-rdbms-server-12cR1-preinstall"
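
At this point you can sanity-check what is in place before cloning (plain lxc and btrfs-progs commands, output omitted):

[root@lxc ~]# lxc-ls
[root@lxc ~]# btrfs subvolume list /btrfs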

Here we are: we are now ready to clone all of this into a new Linux container in seconds, without any additional disk usage.

CLONING PHASE

First, let’s take a picture of the current disk usage:

[root@lxc ~]# df -h 
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_lxc-lv_root   45G  3.1G   40G   8% /
tmpfs                       2.0G     0  2.0G   0% /dev/shm
/dev/sda1                   477M   55M  398M  13% /boot
/dev/sdb                     50G  6.4G   42G  14% /btrfs
/dev/sdc                     50G  1.1G   48G   3% /container

Clone step 1: Add the following to /etc/security/limits.conf (otherwise you won’t be able to su - oracle inside the Linux containers):

*   soft   nofile    1024
*   hard   nofile    65536
*   soft   nproc    2047
*   hard   nproc    16384
*   soft   stack    10240
*   hard   stack    32768

and reboot the virtual machine created in step 1.

Clone step 2: clone the Linux container created in step 8 to a new one named, for example, dbhost1.

[root@lxc oradata]# time lxc-clone -s -t btrfs -o cont_source -n dbhost1
Tweaking configuration
Copying rootfs...
Create a snapshot of '/container/cont_source/rootfs' in '/container/dbhost1/rootfs'
Updating rootfs...
'dbhost1' created

real    0m0.716s
user    0m0.023s
sys     0m0.029s

Clone step 3: clone the database software.

[root@lxc oradata]# time btrfs su snapshot /btrfs/u01 /btrfs/u01_dbhost1
Create a snapshot of '/btrfs/u01' in '/btrfs/u01_dbhost1'

real    0m0.038s
user    0m0.000s
sys     0m0.006s

Clone step 4: clone the database (issue a shutdown immediate first).
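
Using the same sqlplus one-liner style as the startup shown later in this post, the shutdown on the source environment is simply:

[oracle@lxc ~]$ echo "shutdown immediate" | sqlplus / as sysdba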

[root@lxc oradata]# time btrfs su snapshot /btrfs/databases /btrfs/databases_dbhost1
Create a snapshot of '/btrfs/databases' in '/btrfs/databases_dbhost1'

real    0m0.041s
user    0m0.002s
sys     0m0.006s

Clone step 5: Link the new container to the database software and database clones. Edit /container/dbhost1/config and add:

lxc.mount.entry=/btrfs/u01_dbhost1 /container/dbhost1/rootfs/btrfs/u01 none rw,bind 0 0
lxc.mount.entry=/btrfs/databases_dbhost1 /container/dbhost1/rootfs/btrfs/databases none rw,bind 0 0

Clone step 6: Copy dbhome, oraenv and coraenv, create the mount points inside the container’s rootfs, and start the new container dbhost1:

[root@lxc ~]# cp -p /usr/local/bin/coraenv /usr/local/bin/dbhome /usr/local/bin/oraenv /container/dbhost1/rootfs/usr/local/bin
[root@lxc oradata]# mkdir -p /container/dbhost1/rootfs/btrfs/u01
[root@lxc oradata]# mkdir -p /container/dbhost1/rootfs/btrfs/databases
[root@lxc oradata]# lxc-start -n dbhost1

Clone step 7: connect to the new container (default password for root is root), create the oratab, and start the database.

[root@lxc ~]# lxc-console -n dbhost1
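[root@dbhost1 ~]# echo "BDTDB:/btrfs/u01/product/12.1.0/dbhome_1:N" >> /etc/oratab   # oratab entry (sketch): SID and ORACLE_HOME from the install above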
[root@dbhost1 ~]# su  - oracle
[oracle@dbhost1 dbs]$ . oraenv
ORACLE_SID = [BDTDB] ? 
The Oracle base remains unchanged with value /btrfs/u01
[oracle@dbhost1 dbs]$ echo "startup" | sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri Apr 25 08:01:52 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> ORACLE instance started.

Total System Global Area 1570009088 bytes
Fixed Size                  2288776 bytes
Variable Size             905970552 bytes
Database Buffers          654311424 bytes
Redo Buffers                7438336 bytes
Database mounted.
Database opened.

Check the disk usage on the virtual machine created in step 1:

[root@lxc ~]# df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_lxc-lv_root   45G  3.1G   40G   8% /
tmpfs                       2.0G     0  2.0G   0% /dev/shm
/dev/sda1                   477M   55M  398M  13% /boot
/dev/sdb                     50G  6.4G   42G  14% /btrfs
/dev/sdc                     50G  1.1G   48G   3% /container

Et voilà 😉 we created a new Linux container, a new database home and a new database with initially no additional disk space.

Remarks:

  • Jason Arneil did a demo on Linux containers here.
  • Ludovico Caldara also shows how to save disk space when building a RAC with VirtualBox linked clones.
  • If you are interested in how Oracle databases/RAC and lxc can work together, I suggest reading Alvaro Miranda’s posts here (I mainly took my inspiration from his blog).
  • Ofir Manor describes another Linux container use case related to Hadoop cluster here.
  • Cloning the database software as I did here is not the “right” way (see MOS Doc ID 1154613.1), but it suits me fine for my laptop environment.

Conclusion:

We can create a new container, a new ORACLE_HOME and a new database in a few seconds, reducing resource consumption (especially disk) on our laptop.

Again: I would not suggest using all of this on a “real” system, but for a test environment on a laptop it sounds good to me.

I hope you will save some disk space on your laptop thanks to this ;-).

SLOB Logical I/O testing: Check if your benchmark delivers the maximum

With this post, I would like to share one tip related to SLOB Logical I/O testing when you need to find out the maximum “Logical read (blocks) per second” your “benchmark” is capable of.

The tip is rule number two, but let’s describe the well-known rule number one first.

Rule number one: When testing Logical I/O you have to ensure that the AWR report generated by SLOB shows a “Buffer Hit %” of 100% and a “DB CPU” event that is more than 99% of the DB Time. That is to say something like:

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer  Hit   %:  100.00    In-memory Sort %:  100.00
-----------------------------------------------------------------


Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                            Tota    Wait   % DB
Event                                 Waits Time Avg(ms)   time Wait Class
------------------------------ ------------ ---- ------- ------ ----------
DB CPU                                      184.           99.6
control file sequential read            176   .1       0     .0 System I/O
Disk file Mirror Read                    43    0       0     .0 User I/O
Disk file operations I/O                 45    0       0     .0 User I/O

Question: Is that enough to ensure that the “benchmark” delivered the maximum?

Answer: No, this is not enough.

For example, during 2 Logical I/O tests (using only one reader) with these slob.conf parameters:

UPDATE_PCT=0
RUN_TIME=180
WORK_LOOP=0
SCALE=1000

I am able to generate (with rule number one followed, that is to say DB CPU = 99.6% and Buffer Hit = 100%) this load profile:

Load Profile                    Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------
             DB Time(s):               1.0               8.9      0.00      1.09
              DB CPU(s):               1.0               8.8      0.00      1.09
      Redo size (bytes):           7,098.4          61,501.9
  Logical read (blocks):          35,568.6         308,171.6

Or this one:

 
Load Profile                    Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------
             DB Time(s):               1.0               8.8      0.00      0.69
              DB CPU(s):               1.0               8.8      0.00      0.69
      Redo size (bytes):           6,684.2          57,877.5
  Logical read (blocks):       1,075,426.1       9,312,012.1

As you can see, the number of Logical I/O per second delivered during the 2 benchmarks is completely different, going from about 35,000 to about 1 million, even though:

  1. The database did not record any significant wait events, meaning DB CPU is > 99% of DB Time.
  2. The Buffer Hit is 100%.
  3. The number of readers is the same (one in this case: DB Time = 1 second per second).

So, how can I check that the “benchmark” delivered the maximum?

As SLOB launches a query to generate the I/O, we have to check that the elapsed time (in seconds) of the query is about “run time (s) * number of readers”, that is to say 180 seconds in our case (180 * 1).

Let’s check the AWR “SQL ordered by Elapsed Time” section.

For the 35 000 Logical I/O run:

        Elapsed                  Elapsed Time
        Time (s)    Executions  per Exec (s)  %Total   %CPU    %IO    SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
           180.0              1        180.02   96.9   99.6     .0 0c1s032jgyrd7
Module: SQL*Plus
DECLARE v_default_schema_number PLS_INTEGER := '1'; v_update_pct PLS_INTEGER :=
'0'; v_max_loop_iterations PLS_INTEGER := '0'; v_seconds_to_run PLS_INTEGER := '
180'; v_scale PLS_INTEGER := '1000'; v_work_unit PLS_INTEGER := '1' ; v_redo_str
ess VARCHAR2(12) := 'HEAVY'; v_shared_data_modulus PLS_INTEGER := '0'; v_nt_arg

            72.1      3,230,605          0.00   38.8   99.9     .0 bhdvtsvjhgvrh
Module: SQL*Plus
SELECT COUNT(C2) FROM CF1 WHERE ( CUSTID > ( :B1 - :B2 ) ) AND (CUSTID < :B1 )

For the 1000000 Logical I/O run:

        Elapsed                  Elapsed Time
        Time (s)    Executions  per Exec (s)  %Total   %CPU    %IO    SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
           180.0              1        180.03   96.9   99.6     .0 g41kknwap6na9
Module: SQL*Plus
DECLARE v_default_schema_number PLS_INTEGER := '1'; v_update_pct PLS_INTEGER :=
'0'; v_max_loop_iterations PLS_INTEGER := '0'; v_seconds_to_run PLS_INTEGER := '
180'; v_scale PLS_INTEGER := '1000'; v_work_unit PLS_INTEGER := '1000' ; v_redo_
stress VARCHAR2(12) := 'HEAVY'; v_shared_data_modulus PLS_INTEGER := '0'; v_nt_

           172.1        195,156          0.00   92.7   99.7     .0 bhdvtsvjhgvrh
Module: SQL*Plus
SELECT COUNT(C2) FROM CF1 WHERE ( CUSTID > ( :B1 - :B2 ) ) AND (CUSTID < :B1 )

As you can see, during the 1,000,000 Logical I/O run the elapsed time of the SQL responsible for the Logical I/O is about 172 seconds, while it is only about 72 seconds during the 35,000 Logical I/O run.

Which leads me to rule number two: we have to check that the elapsed time (in seconds) of the SQL responsible for the Logical I/O is about “number of readers * run time (s)”.

For example, for a run time of 180 seconds and 20 readers (./runit.sh 20) you should observe an elapsed time close to 20 * 180 = 3,600 seconds, something like:

        Elapsed                  Elapsed Time
        Time (s)    Executions  per Exec (s)  %Total   %CPU    %IO    SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
         3,487.2      2,710,259          0.00   96.7   99.7     .0 bhdvtsvjhgvrh
Module: SQL*Plus
SELECT COUNT(C2) FROM CF1 WHERE ( CUSTID > ( :B1 - :B2 ) ) AND (CUSTID < :B1 )

Remarks:

  1. I had to play with the “WORK_UNIT” SLOB parameter to reach 1,000,000 Logical I/O per second (see the slob.conf below).
  2. WORK_UNIT=256 by default (which was not enough on my system to deliver the maximum Logical I/O per second): you may need to increase it as well if rule number two is not followed.
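
For reference, the slob.conf of the 1,000,000 Logical I/O run then looks like this (the WORK_UNIT value matches the v_work_unit bind visible in the AWR SQL text above):

UPDATE_PCT=0
RUN_TIME=180
WORK_LOOP=0
SCALE=1000
WORK_UNIT=1000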

Conclusion:

To ensure that your SLOB benchmark delivers the maximum Logical I/O per second, you need to check:

  1. That the AWR report generated by SLOB shows a “Buffer Hit %” of 100% and a “DB CPU” event that is more than 99% of the DB Time.
  2. That the elapsed time (in seconds) of the SQL responsible for the Logical I/O is about “number of readers * run time (s)”. If not, you may need to increase the WORK_UNIT SLOB parameter.