Joining the Accenture Enkitec Group

On April 1, 2015, I’ll start working for the Accenture Enkitec Group. No, this is not an April Fools’ joke, and I still can’t believe it. Why? Because:

  • Many of the people from whom I learned (and still learn), and whom I admire, work for this group.
  • I had the chance to meet and/or communicate through social media with Andy Colvin, Kerry Osborne, Karl Arao, Frits Hoogland, Carlos Sierra, Mauro Pagano, Tanel Poder, Martin Bach and Veronica Stigers: they are all good, humble people, and you can feel the passion when they talk about their jobs.
  • I see this group as an Oracle community in itself: always happy to share its knowledge, to help, and to release free tools (recent examples are edb360 and SQLd360).
  • If you know me and you read this, then you should understand why I feel so happy.

I spent the last 3 years working at the European Commission (Data Center). I really enjoyed this job, but as you will have understood, working for the Accenture Enkitec Group is an offer I can’t refuse!

This post is not about promoting myself or the company (I am not a big fan of self-promotion); it is just a dump of my mind and feelings.

Looking forward to starting this new challenge!


processor_group_name and SGA distribution across NUMA nodes

Introduction

On a NUMA-enabled system, a 12c database Instance allocates the SGA evenly across all the NUMA nodes by default (unless you set the “_enable_NUMA_interleave” hidden parameter to FALSE). You can find more details about this behaviour in Yves’s post.
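
If you want to check the current values of those hidden parameters on your own instance, a classic query on the x$ tables does the job (run it as SYSDBA):

select a.ksppinm name, b.ksppstvl value
from x$ksppi a, x$ksppsv b
where a.indx = b.indx
and a.ksppinm in ('_enable_NUMA_interleave', '_enable_NUMA_support');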

Fine, but what if I set the processor_group_name parameter (which instructs the database instance to run itself within a specified operating system processor group) to a cgroup that contains more than one NUMA node?

I may need to link the processor_group_name parameter to more than one NUMA node because:

  • the database needs more memory than one NUMA node can offer.
  • the database needs more cpus than one NUMA node can offer.

In that case, is the SGA still evenly allocated across the NUMA nodes defined in the cgroup?

Time to test

My NUMA configuration is the following:

> lscpu | grep NUMA
NUMA node(s):          8
NUMA node0 CPU(s):     1-10,41-50
NUMA node1 CPU(s):     11-20,51-60
NUMA node2 CPU(s):     21-30,61-70
NUMA node3 CPU(s):     31-40,71-80
NUMA node4 CPU(s):     0,81-89,120-129
NUMA node5 CPU(s):     90-99,130-139
NUMA node6 CPU(s):     100-109,140-149
NUMA node7 CPU(s):     110-119,150-159

with this memory allocation (no Instances up):

 > sh ./numa_memory.sh
                MemTotal         MemFree         MemUsed           Shmem      HugePages_Total       HugePages_Free       HugePages_Surp
Node_0      136052736_kB     49000636_kB     87052100_kB        77728_kB                39300                39044                    0
Node_1      136052736_kB     50503228_kB     85549508_kB        77764_kB                39300                39044                    0
Node_2      136052736_kB     18163112_kB    117889624_kB        78236_kB                39300                39045                    0
Node_3      136052736_kB     47859560_kB     88193176_kB        77744_kB                39300                39045                    0
Node_4      136024568_kB     43286800_kB     92737768_kB        77780_kB                39300                39045                    0
Node_5      136052736_kB     50348004_kB     85704732_kB        77792_kB                39300                39045                    0
Node_6      136052736_kB     31591648_kB    104461088_kB        77976_kB                39299                39044                    0
Node_7      136052736_kB     48524064_kB     87528672_kB        77780_kB                39299                39044                    0

A cgroup (named oracle) has been created with those properties:

   group oracle {
   perm {
     task {
       uid = oracle;
       gid = dc_dba;
     }
     admin {
       uid = oracle;
       gid = dc_dba;
     }
   }
   cpuset {
     cpuset.mems="2,3";
     cpuset.cpus="21-30,61-70,31-40,71-80";
   }
}

The 12.1.0.2 database:

  • has been started using this cgroup thanks to the processor_group_name parameter (so that the database is linked to 2 NUMA nodes: nodes 2 and 3).
  • uses use_large_pages set to ONLY.
  • uses “_enable_NUMA_interleave” set to its default value: TRUE.
  • uses “_enable_NUMA_support” set to its default value: FALSE.
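
As a quick sanity check, you can verify that the instance processes actually run inside the cgroup by looking for a background process pid in the cgroup tasks file (a sketch, using the same kind of pmon pid lookup as later in this series; the instance name BDT12CG is illustrative, adjust to yours):

> grep -w `ps -ef | grep -i pmon | grep BDT12CG | awk '{print $2}'` /cgroup/cpuset/oracle/tasks

If the pid is listed, the instance is indeed managed by the oracle cgroup.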

First test: The Instance has been started with a SGA that could fully fit into one NUMA node.

Let’s check the memory distribution:

> sh ./numa_memory.sh
                MemTotal         MemFree         MemUsed           Shmem      HugePages_Total       HugePages_Free       HugePages_Surp
Node_0      136052736_kB     48999132_kB     87053604_kB        77720_kB                39300                39044                    0
Node_1      136052736_kB     50504752_kB     85547984_kB        77748_kB                39300                39044                    0
Node_2      136052736_kB     17903752_kB    118148984_kB        78224_kB                39300                23427                    0
Node_3      136052736_kB     47511596_kB     88541140_kB        77736_kB                39300                39045                    0
Node_4      136024568_kB     43282316_kB     92742252_kB        77796_kB                39300                39045                    0
Node_5      136052736_kB     50345492_kB     85707244_kB        77804_kB                39300                39045                    0
Node_6      136052736_kB     31581004_kB    104471732_kB        77988_kB                39299                39044                    0
Node_7      136052736_kB     48516964_kB     87535772_kB        77784_kB                39299                39044                    0

As you can see, 39300-23427 = 15873 large pages have been allocated, from node 2 only. This is the amount of large pages needed for the SGA. So we can conclude that the memory is not evenly distributed across NUMA nodes 2 and 3.

Second test: The Instance has been started with a SGA that cannot fully fit into one NUMA node.

Let’s check the memory distribution:

> sh ./numa_memory.sh
                MemTotal         MemFree         MemUsed           Shmem      HugePages_Total       HugePages_Free       HugePages_Surp
Node_0      136052736_kB     51332440_kB     84720296_kB        77832_kB                39300                39044                    0
Node_1      136052736_kB     52532564_kB     83520172_kB        77788_kB                39300                39044                    0
Node_2      136052736_kB     51669192_kB     84383544_kB        77892_kB                39300                    0                    0
Node_3      136052736_kB     52089448_kB     83963288_kB        77860_kB                39300                25864                    0
Node_4      136024568_kB     51992248_kB     84032320_kB        77876_kB                39300                39045                    0
Node_5      136052736_kB     52571468_kB     83481268_kB        77856_kB                39300                39045                    0
Node_6      136052736_kB     52131912_kB     83920824_kB        77844_kB                39299                39044                    0
Node_7      136052736_kB     52268200_kB     83784536_kB        77832_kB                39299                39044                    0

As you can see, the large pages have been allocated from nodes 2 and 3, but not evenly (there are no free large pages left on node 2, while there are still about 25 000 free on node 3).

Remarks

  • I observed the same behaviour with or without ASMM.
  • With no large pages (use_large_pages=FALSE) and no ASMM, I observe this memory distribution:
> sh ./numa_memory.sh
                MemTotal         MemFree         MemUsed           Shmem      HugePages_Total       HugePages_Free       HugePages_Surp
Node_0      136052736_kB     48994024_kB     87058712_kB        77712_kB                39300                39044                    0
Node_1      136052736_kB     50497216_kB     85555520_kB        77752_kB                39300                39044                    0
Node_2      136052736_kB      9225964_kB    126826772_kB      8727192_kB                39300                39045                    0
Node_3      136052736_kB     38986380_kB     97066356_kB      8710796_kB                39300                39045                    0
Node_4      136024568_kB     43279124_kB     92745444_kB        77792_kB                39300                39045                    0
Node_5      136052736_kB     50341284_kB     85711452_kB        77796_kB                39300                39045                    0
Node_6      136052736_kB     31570200_kB    104482536_kB        78000_kB                39299                39044                    0
Node_7      136052736_kB     48505716_kB     87547020_kB        77776_kB                39299                39044                    0

As you can see, the SGA has been evenly distributed across NUMA nodes 2 and 3 (see the Shmem column). I did not run more tests without large pages (i.e., with ASMM or AMM), as not using large pages is not an option for me.

  • With large pages and processor_group_name not set, the memory allocation looks like:
> sh ./numa_memory.sh
                MemTotal         MemFree         MemUsed           Shmem      HugePages_Total       HugePages_Free       HugePages_Surp
Node_0      136052736_kB     51234984_kB     84817752_kB        77824_kB                39300                37094                    0
Node_1      136052736_kB     52417252_kB     83635484_kB        77772_kB                39300                37095                    0
Node_2      136052736_kB     51178872_kB     84873864_kB        77904_kB                39300                37096                    0
Node_3      136052736_kB     52263300_kB     83789436_kB        77872_kB                39300                37096                    0
Node_4      136024568_kB     51870848_kB     84153720_kB        77864_kB                39300                37097                    0
Node_5      136052736_kB     52304932_kB     83747804_kB        77852_kB                39300                37097                    0
Node_6      136052736_kB     51837040_kB     84215696_kB        77840_kB                39299                37096                    0
Node_7      136052736_kB     52213888_kB     83838848_kB        77860_kB                39299                37096                    0

As you can see, the large pages have been evenly allocated from all the NUMA nodes (as stated in the introduction).

  • If you like it, you can get the simple numa_memory.sh script from here (a minimal sketch of it is also shown below).
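
For reference, here is a minimal sketch of what such a script can look like (an assumption on my part, not necessarily the exact numa_memory.sh: it simply formats the per-node meminfo files):

#!/bin/sh
# Minimal sketch: print per-NUMA-node memory and huge pages figures
# by parsing /sys/devices/system/node/node*/meminfo
printf "%-8s%16s%16s%16s%16s%12s%12s\n" "" MemTotal MemFree MemUsed Shmem HP_Total HP_Free
for M in /sys/devices/system/node/node[0-9]*/meminfo
do
  NODE="Node_$(echo ${M} | sed 's/.*node\([0-9]*\).*/\1/')"
  awk -v node="${NODE}" '
    $3 == "MemTotal:"        {total=$4}
    $3 == "MemFree:"         {free=$4}
    $3 == "MemUsed:"         {used=$4}
    $3 == "Shmem:"           {shmem=$4}
    $3 == "HugePages_Total:" {hpt=$4}
    $3 == "HugePages_Free:"  {hpf=$4}
    END {printf "%-8s%13s_kB%13s_kB%13s_kB%13s_kB%12s%12s\n", node, total, free, used, shmem, hpt, hpf}
  ' ${M}
done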

Summary

  1. With large pages in place and processor_group_name linked to more than one NUMA node, the SGA is not evenly distributed across the NUMA nodes defined in the cgroup.
  2. With large pages in place and processor_group_name not set, the SGA is evenly distributed across the NUMA nodes.

Modify on the fly the cgroup properties linked to the processor_group_name parameter

Introduction

I am preparing a blog post in which I’ll try to describe the pros and cons of cpu binding (using the processor_group_name parameter) versus Instance caging.

Today’s comparison:

As you know, you can change the cpu_count and resource_manager_plan parameters while the database Instance is up and running. As a consequence, while the Instance is up and running you can:

  • enable / disable Instance caging.
  • change the cpu_count value and as a consequence the cpu limitation (with the Instance caging already in place).

Fine, but can we do the same with cpu binding? What is the effect of changing the cgroup attributes linked to the processor_group_name parameter while the database Instance is up and running?

Let’s test it.

Initial setup

The database version is 12.1.0.2 and the _enable_NUMA_support parameter has been kept at its default value: FALSE.

With this NUMA setup on a single machine:

NUMA node0 CPU(s):     1-10,41-50
NUMA node1 CPU(s):     11-20,51-60
NUMA node2 CPU(s):     21-30,61-70
NUMA node3 CPU(s):     31-40,71-80
NUMA node4 CPU(s):     0,81-89,120-129
NUMA node5 CPU(s):     90-99,130-139
NUMA node6 CPU(s):     100-109,140-149
NUMA node7 CPU(s):     110-119,150-159

A cgroup (named oracle) has been created with those properties:

group oracle {
   perm {
     task {
       uid = oracle;
       gid = dc_dba;
     }
     admin {
       uid = oracle;
       gid = dc_dba;
     }
   }
   cpuset {
     cpuset.mems="0";
     cpuset.cpus="1-6";
   }
}

This means that I want the SGA to be allocated on NUMA node 0 and the instance to use CPUs 1 to 6.
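
If the libcgroup tools are installed, you can check that the group carries the expected values with cgget (shipped with the libcgroup package; the output format may vary slightly across versions):

> cgget -r cpuset.cpus -r cpuset.mems oracle
oracle:
cpuset.cpus: 1-6
cpuset.mems: 0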

The database has been started using this cgroup thanks to the processor_group_name parameter (the database has to be restarted for it to take effect):

SQL> alter system set processor_group_name='oracle' scope=spfile sid='*';

System altered.

During the database startup you can see the following into the alert.log file:

Instance started in processor group oracle (NUMA Nodes: 0 CPUs: 1-6)

You can also check that the SGA has been allocated from NUMA node 0:

> grep -i HugePages /sys/devices/system/node/node*/meminfo
/sys/devices/system/node/node0/meminfo:Node 0 HugePages_Total: 39303
/sys/devices/system/node/node0/meminfo:Node 0 HugePages_Free:  23589
/sys/devices/system/node/node0/meminfo:Node 0 HugePages_Surp:      0
/sys/devices/system/node/node1/meminfo:Node 1 HugePages_Total: 39302
/sys/devices/system/node/node1/meminfo:Node 1 HugePages_Free:  39046
/sys/devices/system/node/node1/meminfo:Node 1 HugePages_Surp:      0
/sys/devices/system/node/node2/meminfo:Node 2 HugePages_Total: 39302
/sys/devices/system/node/node2/meminfo:Node 2 HugePages_Free:  39047
/sys/devices/system/node/node2/meminfo:Node 2 HugePages_Surp:      0
/sys/devices/system/node/node3/meminfo:Node 3 HugePages_Total: 39302
/sys/devices/system/node/node3/meminfo:Node 3 HugePages_Free:  39047
/sys/devices/system/node/node3/meminfo:Node 3 HugePages_Surp:      0
/sys/devices/system/node/node4/meminfo:Node 4 HugePages_Total: 39302
/sys/devices/system/node/node4/meminfo:Node 4 HugePages_Free:  39047
/sys/devices/system/node/node4/meminfo:Node 4 HugePages_Surp:      0
/sys/devices/system/node/node5/meminfo:Node 5 HugePages_Total: 39302
/sys/devices/system/node/node5/meminfo:Node 5 HugePages_Free:  39047
/sys/devices/system/node/node5/meminfo:Node 5 HugePages_Surp:      0
/sys/devices/system/node/node6/meminfo:Node 6 HugePages_Total: 39302
/sys/devices/system/node/node6/meminfo:Node 6 HugePages_Free:  39047
/sys/devices/system/node/node6/meminfo:Node 6 HugePages_Surp:      0
/sys/devices/system/node/node7/meminfo:Node 7 HugePages_Total: 39302
/sys/devices/system/node/node7/meminfo:Node 7 HugePages_Free:  39047
/sys/devices/system/node/node7/meminfo:Node 7 HugePages_Surp:      0

As you can see, 39303-23589 = 15714 large pages have been allocated from node 0. This is the amount of large pages needed for the SGA.

We can check that the cpu_count parameter has been automatically set to 6:

SQL> select value from v$parameter where name='cpu_count';

VALUE
--------------------------------------------------------------------------------
6

We can also check the cpu affinity of a background process (smon for example):

> taskset -c -p `ps -ef | grep -i smon | grep BDT12CG | awk '{print $2}'`
pid 1019968's current affinity list: 1-6

Note that the same affinity would have been observed with a foreground process.

Now let’s change the cpuset.cpus attribute while the database instance is up and running:

I want the cpus 11 to 20 to be used (instead of 1 to 6):

> /bin/echo 11-20 > /cgroup/cpuset/oracle/cpuset.cpus

After a few seconds, the following line appears in the alert.log file:

Detected change in CPU count to 10

Great, it looks like the database is aware of the new cpuset.cpus value. Let’s check the smon process cpu affinity:

> taskset -c -p `ps -ef | grep -i smon | grep BDT12CG | awk '{print $2}'`
pid 1019968's current affinity list: 11-20

BINGO: The database is now using the new value (i.e., the new cpus) and we don’t need to bounce it!

Remarks:

  • Depending on the new cpuset.cpus value, the SGA allocation may no longer be “optimal” (if the cpus are not on the same NUMA node where the SGA is currently allocated).
  • Don’t forget to update the /etc/cgconfig.conf file accordingly (if not, a restart of the cgconfig service will overwrite your changes).
  • I made the same change during a fully cached SLOB run of 9 readers, and the mpstat -P ALL 3 output moved from:
02:21:54 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
02:21:57 PM  all    3.79    0.01    0.21    0.01    0.00    0.00    0.00    0.00   95.97
02:21:57 PM    0    0.34    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.66
02:21:57 PM    1   98.66    0.00    1.34    0.00    0.00    0.00    0.00    0.00    0.00
02:21:57 PM    2   98.67    0.00    1.33    0.00    0.00    0.00    0.00    0.00    0.00
02:21:57 PM    3   92.31    0.00    7.69    0.00    0.00    0.00    0.00    0.00    0.00
02:21:57 PM    4   97.00    0.00    3.00    0.00    0.00    0.00    0.00    0.00    0.00
02:21:57 PM    5   98.34    0.00    1.66    0.00    0.00    0.00    0.00    0.00    0.00
02:21:57 PM    6   99.33    0.00    0.67    0.00    0.00    0.00    0.00    0.00    0.00
.
.
.
02:21:57 PM   11    1.34    1.68    0.34    0.00    0.00    0.00    0.00    0.00   96.64
02:21:57 PM   12    0.67    0.00    0.00    0.34    0.00    0.00    0.00    0.00   98.99
02:21:57 PM   13    0.33    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.67
02:21:57 PM   14    0.33    0.00    0.33    0.00    0.00    0.00    0.00    0.00   99.33
02:21:57 PM   15    0.34    0.00    0.34    0.00    0.00    0.00    0.00    0.00   99.33
02:21:57 PM   16    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
02:21:57 PM   17    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
02:21:57 PM   18    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
02:21:57 PM   19    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
02:21:57 PM   20    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

to:

02:22:00 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
02:22:03 PM  all    5.71    0.00    0.18    0.02    0.00    0.00    0.00    0.00   94.09
02:22:03 PM    0    0.33    0.00    0.00    0.00    0.00    0.33    0.00    0.00   99.33
02:22:03 PM    1    0.34    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.66
02:22:03 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
02:22:03 PM    3    2.70    0.00    6.08    0.00    0.00    0.00    0.00    0.00   91.22
02:22:03 PM    4    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
02:22:03 PM    5    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
02:22:03 PM    6    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
.
.
.
02:22:03 PM   11    2.05    0.00    1.71    0.34    0.00    0.00    0.00    0.00   95.89
02:22:03 PM   12   99.00    0.00    1.00    0.00    0.00    0.00    0.00    0.00    0.00
02:22:03 PM   13   99.33    0.00    0.67    0.00    0.00    0.00    0.00    0.00    0.00
02:22:03 PM   14   99.00    0.00    1.00    0.00    0.00    0.00    0.00    0.00    0.00
02:22:03 PM   15   97.32    0.00    2.68    0.00    0.00    0.00    0.00    0.00    0.00
02:22:03 PM   16   98.67    0.00    1.33    0.00    0.00    0.00    0.00    0.00    0.00
02:22:03 PM   17   99.33    0.00    0.67    0.00    0.00    0.00    0.00    0.00    0.00
02:22:03 PM   18   99.00    0.00    1.00    0.00    0.00    0.00    0.00    0.00    0.00
02:22:03 PM   19   99.00    0.00    1.00    0.00    0.00    0.00    0.00    0.00    0.00
02:22:03 PM   20   99.67    0.00    0.33    0.00    0.00    0.00    0.00    0.00    0.00

As you can see the process activity moved from cpus 1-6 to cpus 11-20: This is exactly what we want.

Now, out of curiosity: what about the memory? Let’s change the cpuset.mems property while the database is up and running.

Let’s say that we want the SGA to migrate from NUMA node 0 to node 1.

To achieve this we also need to set the cpuset memory_migrate attribute to 1 (the default is 0, meaning memory allocation does not follow cpuset.mems updates).

So, let’s change the memory_migrate attribute to 1 and cpuset.mems from node 0 to node 1:

/bin/echo 1 > /cgroup/cpuset/oracle/cpuset.memory_migrate
/bin/echo 1 > /cgroup/cpuset/oracle/cpuset.mems

What is the effect on the SGA allocation?

grep -i HugePages /sys/devices/system/node/node*/meminfo               
/sys/devices/system/node/node0/meminfo:Node 0 HugePages_Total: 39303
/sys/devices/system/node/node0/meminfo:Node 0 HugePages_Free:  23589
/sys/devices/system/node/node0/meminfo:Node 0 HugePages_Surp:      0
/sys/devices/system/node/node1/meminfo:Node 1 HugePages_Total: 39302
/sys/devices/system/node/node1/meminfo:Node 1 HugePages_Free:  39046
/sys/devices/system/node/node1/meminfo:Node 1 HugePages_Surp:      0
/sys/devices/system/node/node2/meminfo:Node 2 HugePages_Total: 39302
/sys/devices/system/node/node2/meminfo:Node 2 HugePages_Free:  39047
/sys/devices/system/node/node2/meminfo:Node 2 HugePages_Surp:      0
/sys/devices/system/node/node3/meminfo:Node 3 HugePages_Total: 39302
/sys/devices/system/node/node3/meminfo:Node 3 HugePages_Free:  39047
/sys/devices/system/node/node3/meminfo:Node 3 HugePages_Surp:      0
/sys/devices/system/node/node4/meminfo:Node 4 HugePages_Total: 39302
/sys/devices/system/node/node4/meminfo:Node 4 HugePages_Free:  39047
/sys/devices/system/node/node4/meminfo:Node 4 HugePages_Surp:      0
/sys/devices/system/node/node5/meminfo:Node 5 HugePages_Total: 39302
/sys/devices/system/node/node5/meminfo:Node 5 HugePages_Free:  39047
/sys/devices/system/node/node5/meminfo:Node 5 HugePages_Surp:      0
/sys/devices/system/node/node6/meminfo:Node 6 HugePages_Total: 39302
/sys/devices/system/node/node6/meminfo:Node 6 HugePages_Free:  39047
/sys/devices/system/node/node6/meminfo:Node 6 HugePages_Surp:      0
/sys/devices/system/node/node7/meminfo:Node 7 HugePages_Total: 39302
/sys/devices/system/node/node7/meminfo:Node 7 HugePages_Free:  39047
/sys/devices/system/node/node7/meminfo:Node 7 HugePages_Surp:      0

None: as you can see, the SGA is still allocated on node 0 (I observed the same during the fully cached SLOB run).

Then, let’s stop the database and add the memory_migrate attribute to the /etc/cgconfig.conf file this way:

   cpuset {
     cpuset.mems="0";
     cpuset.cpus="1-6";
     cpuset.memory_migrate=1;
   }

restart the cgconfig service:

> service cgconfig restart

and check the value:

> cat /cgroup/cpuset/oracle/cpuset.memory_migrate
1

Well, it is set to 1. Let’s start the database and check again:

> cat /cgroup/cpuset/oracle/cpuset.memory_migrate
0

Hey! It has been set back to 0 after the database startup. So, let’s conclude that, by default, the memory cannot be dynamically migrated from one NUMA node to another by changing the cpuset.mems value on the fly.

Remark:

Obviously, changing the cpuset.mems attribute from node 0 to node 1:

> /bin/echo 1 > /cgroup/cpuset/oracle/cpuset.mems

and bouncing the Instance will allocate the memory on the new node. Again, don’t forget to update the /etc/cgconfig.conf file accordingly (if not, a restart of the cgconfig service will overwrite your changes).
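
For example, moving both the memory and the cpus to NUMA node 1 (cpus 11-20 belong to node 1 on this machine) would mean updating the cpuset section that way:

   cpuset {
     cpuset.mems="1";
     cpuset.cpus="11-20";
   }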

Conclusion:

By changing the cpuset.cpus value:

  • We have been able to change the cpu affinity dynamically (without a database Instance restart).

By changing the cpuset.mems value:

  • We have not been able to change the memory node dynamically (an Instance restart is needed).

The main goal is reached: with cpu binding already in place (using processor_group_name), we are able to change on the fly the number of cpus a database is limited to use.


cpu binding (processor_group_name) vs Instance caging comparison during LIO pressure

Introduction

Suppose that I am trying to consolidate multiple databases onto a single server and that I would like to limit the cpu usage of one of the databases. I decide to test both the cpu binding approach and the Instance caging approach.

You can find a great description of each approach and how to implement it:

During those tests, I would like to compare cpu binding and Instance caging performance under Logical IO pressure (by pressure I mean that the database needs more cpu resources than the ones it is limited to use).

Tests setup

The tests will be performed on a single machine with this NUMA configuration:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                160
On-line CPU(s) list:   0-159
Thread(s) per core:    2
Core(s) per socket:    10
Socket(s):             8
NUMA node(s):          8
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 47
Stepping:              2
CPU MHz:               1995.926
BogoMIPS:              3991.60
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              24576K
NUMA node0 CPU(s):     1-10,41-50
NUMA node1 CPU(s):     11-20,51-60
NUMA node2 CPU(s):     21-30,61-70
NUMA node3 CPU(s):     31-40,71-80
NUMA node4 CPU(s):     0,81-89,120-129
NUMA node5 CPU(s):     90-99,130-139
NUMA node6 CPU(s):     100-109,140-149
NUMA node7 CPU(s):     110-119,150-159

and this cpu topology (the script comes from Karl Arao’s cputoolkit here):

> sh ./cpu_topology
model name      : Intel(R) Xeon(R) CPU E7- 4850  @ 2.00GHz
processors  (OS CPU count)          0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159
physical id (processor socket)      0 4 4 4 4 4 4 4 4 4 4 5 5 5 5 5 5 5 5 5 5 6 6 6 6 6 6 6 6 6 6 7 7 7 7 7 7 7 7 7 7 4 4 4 4 4 4 4 4 4 4 5 5 5 5 5 5 5 5 5 5 6 6 6 6 6 6 6 6 6 6 7 7 7 7 7 7 7 7 7 7 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3
siblings    (logical CPUs/socket)   20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
core id     (# assigned to a core)  0 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25 0 1 2 8 9 16 17 18 24 25
cpu cores   (physical cores/socket) 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10

For the tests I’ll use SLOB and apply some parameters at the instance level for Logical IO testing:

> cat lio_init.sql
alter system reset "_db_block_prefetch_limit" scope=spfile sid='*';
alter system reset "_db_block_prefetch_quota"  scope=spfile sid='*';
alter system reset "_db_file_noncontig_mblock_read_count" scope=spfile sid='*';
alter system reset "cpu_count" scope=spfile sid='*';
alter system set "db_cache_size"=20g scope=spfile sid='*';
alter system set "shared_pool_size"=10g scope=spfile sid='*';
alter system set "sga_target"=0 scope=spfile sid='*';

I also have to say that the _enable_NUMA_support parameter has been kept to its default value: FALSE.

Then, for accurate comparison, each test will be based on the same SLOB configuration and workload: 9 readers and a fixed run time of 5 minutes. I’ll run SLOB 3 times per test and then compare the average number of logical reads per second across the 3 runs.
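
For what it’s worth, the test driver can be as simple as this sketch (assuming a standard SLOB 2 setup, where runit.sh <number of readers> launches a run and leaves an awr.txt report in the SLOB directory):

# run SLOB 3 times with 9 readers and keep each AWR report
for RUN in 1 2 3
do
  sh ./runit.sh 9
  cp awr.txt awr_run${RUN}.txt
  grep "Logical read" awr_run${RUN}.txt
done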

TEST 1: Just out of curiosity, launch the SLOB runs without cpu binding and without Instance caging (without LIO pressure)

With 9 readers, the average result is the following:

Load Profile                    Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------
             DB Time(s):               8.9              58.8      0.00      3.66
              DB CPU(s):               8.9              58.7      0.00      3.65
  Logical read (blocks):       6,323,758.9      41,668,897.2



Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                           Total Wait       Wait   % DB Wait
Event                                Waits Time (sec)    Avg(ms)   time Class
------------------------------ ----------- ---------- ---------- ------ --------
DB CPU                                         2700.6              99.8

As you can see, during those 5 minutes of run time SLOB generated about 6 300 000 logical reads per second. Those are also fully cached SLOB runs, as the top event is “DB CPU” for about 100% of the DB time.

TEST 2: Instance caging in place and LIO pressure

As I am using 9 SLOB readers, I’ll set the cpu_count parameter to 6 (to produce LIO pressure).

SQL> alter system set cpu_count=6;

System altered.

SQL> alter system set resource_manager_plan='default_plan';

System altered.

The average result is the following:

Load Profile                    Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------
             DB Time(s):               8.9              64.4      0.00      3.86
              DB CPU(s):               5.4              39.0      0.00      2.34
  Logical read (blocks):       3,495,618.2      25,222,299.9



Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                           Total Wait       Wait   % DB Wait
Event                                Waits Time (sec)    Avg(ms)   time Class
------------------------------ ----------- ---------- ---------- ------ --------
DB CPU                                         1638.6              60.6
resmgr:cpu quantum                  16,005     1065.4      66.57   39.4 Schedule

As you can see, during those 5 minutes of run time SLOB generated about 3 500 000 logical reads per second. Those are also fully cached SLOB runs, as the top events, “DB CPU” and “resmgr:cpu quantum” (the latter linked to the Instance caging), account for about 100% of the DB time.

TEST 3: cpu binding in place and LIO pressure

As I am using 9 SLOB readers, I’ll set 6 cpus in the “oracle” cgroup (to compare with the previous test and to produce LIO pressure) that way:

   cpuset {
     cpuset.mems="0";
     cpuset.cpus="1-6";
   }

remove caging and set processor_group_name to “oracle”:

SQL> alter system reset cpu_count scope=spfile sid='*';

System altered.

SQL> alter system reset resource_manager_plan scope=spfile sid='*';

System altered.

SQL> alter system set processor_group_name='oracle' scope=spfile sid='*';

System altered.

The average result is the following:

Load Profile                    Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------
             DB Time(s):               8.9              64.4      0.00      3.70
              DB CPU(s):               5.9              42.3      0.00      2.43
  Logical read (blocks):       5,786,023.5      41,714,611.9



Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                           Total Wait       Wait   % DB Wait
Event                                Waits Time (sec)    Avg(ms)   time Class
------------------------------ ----------- ---------- ---------- ------ --------
DB CPU                                         1775.1              65.6
db file sequential read                578        1.5       2.64     .1 User I/O
latch free                             213          1       4.80     .0 Other
.
.
.

As you can see, during those 5 minutes of run time:

  • SLOB generated about 5 800 000 logical reads per second.
  • The sum of the “% DB Wait time” values is less than 100% (“DB CPU” represents 65.6% of it).

This last observation is due to the fact that “CPU WAIT” is not included in “DB CPU”. OEM gives a clear picture of it:

[OEM screenshot: DB CPU versus CPU Wait during the run]

Remarks:

  1. You can find more details about “DB CPU” and “CPU WAIT” in this post by Doug Burns.
  2. Kyle Hailey provided a SQL query to measure “DB CPU” and “CPU WAIT” in this blog post.

That said, I can conclude that about 100% of the DB Time has been spent on CPU or in the run queue, so those are also fully cached SLOB runs.
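
This is not Kyle’s exact query, but as a rough sketch: ASH samples flagged “ON CPU” include the time spent runnable in the run queue, while the DB CPU time model statistic does not. So a query like this one (one ASH sample being roughly one second of DB time) already gives a good idea of the split:

select session_state, count(*) approx_db_time_secs
from v$active_session_history
where sample_time > sysdate - interval '5' minute
group by session_state;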

BUT WAIT: The previous test (cpu binding) has an advantage compared to the Instance caging one: the SGA and the cpus have been allocated from the same NUMA node (see the beginning of this post), while the SGA is spread across NUMA nodes in the Instance caging test. And it matters: see this post for more details.

So I need to set up a new test: create a cgroup with 9 cpus on the same NUMA node as the SGA, set the caging to 6 and start the instance in this cgroup.

That way, I’ll be able to compare apples to apples, i.e., compare:

  • TEST 3: cpu binding in place and LIO pressure
  • TEST 4: cpu binding and Instance caging in place with LIO pressure due to the caging

Let’s do it:

TEST 4: cpu binding and Instance caging in place with LIO pressure due to the caging

As I am using 9 SLOB readers, I’ll set 9 cpus in the “oracle” cgroup and put the caging to 6 (to compare with the previous test and to produce LIO pressure) that way:

SQL> alter system set cpu_count=6 ;

System altered.

SQL> alter system set resource_manager_plan='default_plan';

System altered.

SQL> alter system set processor_group_name='oracle' scope=spfile sid='*';

System altered.

with the “oracle” cgroup setup that way (cpus and SGA allocated on the same NUMA node):

   cpuset {
     cpuset.mems="0";
     cpuset.cpus="1-9";
   }

The average result is the following:

Load Profile                    Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------
             DB Time(s):               8.9              58.8      0.00      3.81
              DB CPU(s):               5.4              35.4      0.00      2.29
  Logical read (blocks):       5,182,622.4      34,126,892.6



Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                           Total Wait       Wait   % DB Wait
Event                                Waits Time (sec)    Avg(ms)   time Class
------------------------------ ----------- ---------- ---------- ------ --------
DB CPU                                         1627.8              60.2
resmgr:cpu quantum                  16,190     1064.9      65.78   39.4 Schedule

As you can see, during those 5 minutes of run time SLOB generated about 5 100 000 logical reads per second. Those are also fully cached SLOB runs, as the top events, “DB CPU” and “resmgr:cpu quantum” (the latter linked to the Instance caging), account for about 100% of the DB time.

One last test, out of curiosity, to compare with TEST 1:

TEST 5: Just out of curiosity, launch the SLOB runs with cpu binding and without Instance caging (without LIO pressure)

As I am using 9 SLOB readers, I’ll set 9 cpus in the “oracle” cgroup without caging (to compare with test 1 and not to produce LIO pressure).

The average result is the following:

Load Profile                    Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------
             DB Time(s):               9.0              64.5      0.00      2.79
              DB CPU(s):               8.8              63.8      0.00      2.75
  Logical read (blocks):       8,623,444.6      62,178,526.0



Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                           Total Wait       Wait   % DB Wait
Event                                Waits Time (sec)    Avg(ms)   time Class
------------------------------ ----------- ---------- ---------- ------ --------
DB CPU                                         2677.3              98.8

As you can see, during those 5 minutes of run time SLOB generated about 8 600 000 logical reads per second. Those are also fully cached SLOB runs, as the top event is “DB CPU” with about 100% of the DB time.

Conclusion:

The ranking is the following (logical reads per second in descending order):

  1. cpu binding (without LIO pressure):  8 600 000 logical reads per second.
  2. without anything (without LIO pressure): 6 300 000 logical reads per second.
  3. cpu binding only and LIO pressure: 5 800 000 logical reads per second.
  4. cpu binding and Instance caging (LIO pressure due to the caging): 5 100 000 logical reads per second.
  5. Instance caging only and LIO pressure: 3 500 000 logical reads per second.

Only ranks 3, 4 and 5 are of interest regarding the initial goal of this post, and only ranks 3 and 4 form a valid comparison.

So, comparing ranks 3 and 4, I can conclude that Instance caging has been less performant than cpu binding by about 10% under LIO pressure (by pressure I mean that the database needs more cpu resources than the ones it is limited to use). It makes sense to me, as the caging is managed by the oracle database software itself and therefore adds some overhead.

Now that I have compared cpu binding and Instance caging from a performance point of view, I’ll discuss the pros and cons of both approaches in another blog post.

Useful links related to the oracle database and CPU interaction:


Measure the impact of remote versus local NUMA node access thanks to processor_group_name

Introduction

Starting with oracle database 12c, you can use the processor_group_name parameter to instruct the database instance to run itself within a specified operating system processor group. You can find more details about this feature in Nikolay Manchev’s blog post.

So basically, once the libcgroup package has been installed:

> yum install libcgroup

you can create an “oracle” group (for example) by editing the /etc/cgconfig.conf file so that it looks like:

> cat /etc/cgconfig.conf 
mount {
        cpuset  = /cgroup/cpuset;
        cpu     = /cgroup/cpu;
        cpuacct = /cgroup/cpuacct;
        memory  = /cgroup/memory;
        devices = /cgroup/devices;
        freezer = /cgroup/freezer;
        net_cls = /cgroup/net_cls;
        blkio   = /cgroup/blkio;
}

group oracle {
   perm {
     task {
       uid = oracle;
       gid = dba;
     }
     admin {
       uid = oracle;
       gid = dba;
     }
   }
   cpuset {
     cpuset.mems="0";
     cpuset.cpus="1-9";
   }
}

start the cgconfig service:

> service cgconfig start

And then set the processor_group_name instance parameter to the group name (“oracle” in our example).

The interesting part is that, to set up the group, we have to use 2 mandatory parameters in the cpuset section:

  • cpuset.mems: specifies the memory nodes that tasks in this cgroup are permitted to access.
  • cpuset.cpus: specifies the CPUs that tasks in this cgroup are permitted to access.

So, it means that with processor_group_name set:

  • The SGA will be allocated according to the cpuset.mems parameter.
  • The processes will be bound to the CPUs according to the cpuset.cpus parameter.
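
Once the cgconfig service is started, a quick way to double-check the values taken into account is to read them back from the cgroup virtual filesystem (using the mount points defined above):

> cat /cgroup/cpuset/oracle/cpuset.cpus
1-9
> cat /cgroup/cpuset/oracle/cpuset.mems
0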

Then, we can measure the impact of remote versus local NUMA node access thanks to the processor_group_name parameter. Let’s do it.

Tests setup

My NUMA configuration is the following:

> numactl --hardware
available: 8 nodes (0-7)
node 0 cpus: 1 2 3 4 5 6 7 8 9 10 41 42 43 44 45 46 47 48 49 50
node 0 size: 132864 MB
node 0 free: 45326 MB
node 1 cpus: 11 12 13 14 15 16 17 18 19 20 51 52 53 54 55 56 57 58 59 60
node 1 size: 132864 MB
node 1 free: 46136 MB
node 2 cpus: 21 22 23 24 25 26 27 28 29 30 61 62 63 64 65 66 67 68 69 70
node 2 size: 132864 MB
node 2 free: 40572 MB
node 3 cpus: 31 32 33 34 35 36 37 38 39 40 71 72 73 74 75 76 77 78 79 80
node 3 size: 132864 MB
node 3 free: 47145 MB
node 4 cpus: 0 81 82 83 84 85 86 87 88 89 120 121 122 123 124 125 126 127 128 129
node 4 size: 132836 MB
node 4 free: 40573 MB
node 5 cpus: 90 91 92 93 94 95 96 97 98 99 130 131 132 133 134 135 136 137 138 139
node 5 size: 132864 MB
node 5 free: 45564 MB
node 6 cpus: 100 101 102 103 104 105 106 107 108 109 140 141 142 143 144 145 146 147 148 149
node 6 size: 132864 MB
node 6 free: 20592 MB
node 7 cpus: 110 111 112 113 114 115 116 117 118 119 150 151 152 153 154 155 156 157 158 159
node 7 size: 132864 MB
node 7 free: 44170 MB
node distances:
node   0   1   2   3   4   5   6   7 
  0:  10  13  12  12  22  22  22  22 
  1:  13  10  12  12  22  22  22  22 
  2:  12  12  10  13  22  22  22  22 
  3:  12  12  13  10  22  22  22  22 
  4:  22  22  22  22  10  13  12  12 
  5:  22  22  22  22  13  10  12  12 
  6:  22  22  22  22  12  12  10  13 
  7:  22  22  22  22  12  12  13  10

As you can see:

  • CPUs 1 to 9 are located in node 0.
  • node 0 is far from node 7 (the distance equals 22).

Then I am able to test local NUMA node access by creating the group with:

  • cpuset.cpus="1-9"
  • cpuset.mems="0"

As CPUs 1-9 are located in node 0 and the SGA will be created in node 0, the access is local.

and I am able to test remote NUMA node access by creating the group with:

  • cpuset.mems="7"
  • cpuset.cpus="1-9"

As CPUs 1-9 are located in node 0 while the SGA will be created in node 7, the access is remote.

To test the effect of remote versus local NUMA node access, I’ll use SLOB. From my point of view this is the perfect/mandatory tool to test the effect of NUMA from the oracle side.

For the tests I created 8 SLOB schemas (with SCALE=10000, so 80 MB each) and applied some parameters at the instance level for Logical IO testing (as we need fully cached SLOB runs to test NUMA efficiently):

> cat lio_init.sql
alter system reset "_db_block_prefetch_limit" scope=spfile sid='*';
alter system reset "_db_block_prefetch_quota"  scope=spfile sid='*';
alter system reset "_db_file_noncontig_mblock_read_count" scope=spfile sid='*';
alter system reset "cpu_count" scope=spfile sid='*';
alter system set "db_cache_size"=20g scope=spfile sid='*';
alter system set "shared_pool_size"=10g scope=spfile sid='*';
alter system set "sga_target"=0 scope=spfile sid='*';

and set processor_group_name to oracle:

SQL> alter system set processor_group_name='oracle' scope=spfile sid='*';

For accurate comparison, each test will be based on the same SLOB configuration and workload: 8 readers and a fixed run time of 3 minutes. Then I’ll compare how many logical reads have been done in both cases.

Test 1: Measure local NUMA node access:

As explained previously, I do it by updating /etc/cgconfig.conf file that way:

  • cpuset.cpus="1-9"
  • cpuset.mems="0"

restart the service:

> service cgconfig restart

and the database instance.

Check that the SGA is allocated in node 0:

> grep -i HugePages /sys/devices/system/node/node*/meminfo

/sys/devices/system/node/node0/meminfo:Node 0 HugePages_Total: 39303
/sys/devices/system/node/node0/meminfo:Node 0 HugePages_Free:  23589
/sys/devices/system/node/node0/meminfo:Node 0 HugePages_Surp:      0
/sys/devices/system/node/node1/meminfo:Node 1 HugePages_Total: 39302
/sys/devices/system/node/node1/meminfo:Node 1 HugePages_Free:  39046
/sys/devices/system/node/node1/meminfo:Node 1 HugePages_Surp:      0
/sys/devices/system/node/node2/meminfo:Node 2 HugePages_Total: 39302
/sys/devices/system/node/node2/meminfo:Node 2 HugePages_Free:  39047
/sys/devices/system/node/node2/meminfo:Node 2 HugePages_Surp:      0
/sys/devices/system/node/node3/meminfo:Node 3 HugePages_Total: 39302
/sys/devices/system/node/node3/meminfo:Node 3 HugePages_Free:  39047
/sys/devices/system/node/node3/meminfo:Node 3 HugePages_Surp:      0
/sys/devices/system/node/node4/meminfo:Node 4 HugePages_Total: 39302
/sys/devices/system/node/node4/meminfo:Node 4 HugePages_Free:  39047
/sys/devices/system/node/node4/meminfo:Node 4 HugePages_Surp:      0
/sys/devices/system/node/node5/meminfo:Node 5 HugePages_Total: 39302
/sys/devices/system/node/node5/meminfo:Node 5 HugePages_Free:  39047
/sys/devices/system/node/node5/meminfo:Node 5 HugePages_Surp:      0
/sys/devices/system/node/node6/meminfo:Node 6 HugePages_Total: 39302
/sys/devices/system/node/node6/meminfo:Node 6 HugePages_Free:  39047
/sys/devices/system/node/node6/meminfo:Node 6 HugePages_Surp:      0
/sys/devices/system/node/node7/meminfo:Node 7 HugePages_Total: 39302
/sys/devices/system/node/node7/meminfo:Node 7 HugePages_Free:  39047
/sys/devices/system/node/node7/meminfo:Node 7 HugePages_Surp:      0

As you can see, 39303-23589 = 15714 large pages have been allocated from node 0. This is the amount of large pages needed for my SGA.

Then I launched SLOB 2 times with the following results:

Load Profile                    	Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~            		---------------   --------------- --------- ---------
Run 1:  Logical read (blocks):       	4,551,737.9      52,878,393.1
Run 2: 	Logical read (blocks):   	4,685,712.3      18,984,632.1


                                           	Total Wait       Wait   % DB Wait
Event                                	Waits 	Time (sec)    Avg(ms)   time Class
------------------------------    ----------- 	---------- ---------- ------ --------
Run 1: DB CPU                                       1423.2              97.8
Run 2: DB CPU                                       1431.4              99.2

As you can see, during the 3 minutes of run time SLOB generated about 4 500 000 logical reads per second with local NUMA node access in place. Those are also fully cached SLOB runs, as the top event is “DB CPU” with about 100% of the DB time.

Test 2: Measure remote NUMA node access:

As explained previously, I do it by updating /etc/cgconfig.conf file that way:

  • cpuset.cpus="1-9"
  • cpuset.mems="7"

restart the service:

> service cgconfig restart

and the database instance.

Check that the SGA is allocated in node 7:

> grep -i HugePages /sys/devices/system/node/node*/meminfo
/sys/devices/system/node/node0/meminfo:Node 0 HugePages_Total: 39303
/sys/devices/system/node/node0/meminfo:Node 0 HugePages_Free:  39047
/sys/devices/system/node/node0/meminfo:Node 0 HugePages_Surp:      0
/sys/devices/system/node/node1/meminfo:Node 1 HugePages_Total: 39302
/sys/devices/system/node/node1/meminfo:Node 1 HugePages_Free:  39046
/sys/devices/system/node/node1/meminfo:Node 1 HugePages_Surp:      0
/sys/devices/system/node/node2/meminfo:Node 2 HugePages_Total: 39302
/sys/devices/system/node/node2/meminfo:Node 2 HugePages_Free:  39047
/sys/devices/system/node/node2/meminfo:Node 2 HugePages_Surp:      0
/sys/devices/system/node/node3/meminfo:Node 3 HugePages_Total: 39302
/sys/devices/system/node/node3/meminfo:Node 3 HugePages_Free:  39047
/sys/devices/system/node/node3/meminfo:Node 3 HugePages_Surp:      0
/sys/devices/system/node/node4/meminfo:Node 4 HugePages_Total: 39302
/sys/devices/system/node/node4/meminfo:Node 4 HugePages_Free:  39047
/sys/devices/system/node/node4/meminfo:Node 4 HugePages_Surp:      0
/sys/devices/system/node/node5/meminfo:Node 5 HugePages_Total: 39302
/sys/devices/system/node/node5/meminfo:Node 5 HugePages_Free:  39047
/sys/devices/system/node/node5/meminfo:Node 5 HugePages_Surp:      0
/sys/devices/system/node/node6/meminfo:Node 6 HugePages_Total: 39302
/sys/devices/system/node/node6/meminfo:Node 6 HugePages_Free:  39047
/sys/devices/system/node/node6/meminfo:Node 6 HugePages_Surp:      0
/sys/devices/system/node/node7/meminfo:Node 7 HugePages_Total: 39302
/sys/devices/system/node/node7/meminfo:Node 7 HugePages_Free:  23557
/sys/devices/system/node/node7/meminfo:Node 7 HugePages_Surp:      0

As you can see, 39302-23557 = 15745 large pages have been allocated from node 7. This is the amount of large pages needed for my SGA.

Then I launched SLOB 2 times with the following results:

Load Profile                    	Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~            		---------------   --------------- --------- ---------
Run 1:  Logical read (blocks):       	2,133,879.6      24,991,464.4
Run 2: 	Logical read (blocks):   	2,203,307.5      10,879,098.7


                                           	Total Wait       Wait   % DB Wait
Event                                	Waits 	Time (sec)    Avg(ms)   time Class
------------------------------    ----------- 	---------- ---------- ------ --------
Run 1: DB CPU                                       1420.7              97.7
Run 2: DB CPU                                       1432.6              99.0

As you can see, during the 3 minutes of run time SLOB generated about 2 100 000 logical reads per second with remote NUMA node access in place. Those are also fully cached SLOB runs, as the top event is “DB CPU” with about 100% of the DB time.

This is about 2.15 times less than with local NUMA node access, which is roughly the ratio of the node 0 to node 7 distance to the node 0 to node 0 distance (22/10).

Remarks:

  • The SGA is fully made of large pages, as I set the use_large_pages parameter to “only”.
  • To check that only CPUs 1 to 9 worked during the SLOB run, you can read the mpstat.out file provided by SLOB at the end of the test.
  • To check that the oracle sessions running the SLOB workload are bound to CPUs 1 to 9, you can use taskset:
> ps -ef | grep 641387
oracle    641387  641353 87 10:15 ?        00:00:55 oracleBDT12CG_1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
> taskset -c -p 641387
pid 641387's current affinity list: 1-9
  • To check that your SLOB Logical I/O test delivers the maximum, you can read this post.
  • With SLOB schemas of 8 MB, I was not able to see this huge difference between remote and local NUMA node access. I guess this is because the working set fits into the 24576K L3 cache of the cpus used during my tests.

Conclusion:

  • Thanks to the 12c database parameter processor_group_name, we have been able to measure the impact of remote versus local NUMA node access from the database point of view.
  • If you plan to use the processor_group_name database parameter, then pay attention to configure the group correctly with cpuset.cpus and cpuset.mems to ensure local NUMA node access (my tests show 2.15 times fewer logical reads per second with my farthest remote access).

Useful links related to NUMA:


FAN callouts for In-Memory Instance distribution with RAC, PDB and preferred/available service

The context:

In my previous blog post, In-Memory Instance distribution with RAC and Multitenant Environment, I checked whether we can influence how the In-Memory data is distributed across all the Instances per PDB (i.e., get rid of the default behaviour). I checked because:

  • By default, all objects populated into memory will be distributed across all of the IM column stores in the cluster.
  • Christian Antognini shows us in this blog post that having the data not fully populated on an Instance (the default behaviour on non-Engineered Systems) could lead to bad performance.

Let’s summarize the findings:

  • If the PDB is open on one Instance, then this Instance contains all the data.
  • If the PDB is open on all the Instances, then we have to set “_inmemory_auto_distribute” to false to have at least one Instance that contains all the data.
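
As a reminder, the distribution can be observed with a query like this one: gv$im_segments reports, per instance, how much of each segment is not populated (a value of 0 in BYTES_NOT_POPULATED meaning that the instance contains all the data of that segment):

select inst_id, segment_name, populate_status, bytes_not_populated
from gv$im_segments;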

The objective:

That said, let’s try to reach an objective: with a service defined as preferred/available on a PDB, I want the PDB to be open on only one Instance at any time (that way we don’t need to set the hidden parameter).

In this blog post, Martin Bach already explained the PDB open/close behaviour with a preferred/available service defined on it. I don’t want to repeat Martin’s work; I just want to draw your attention to the following (with our objective in mind):

The default PDB status is:

SYS@CDB$ROOT> select inst_id,open_mode from gv$pdbs where name='BDTPDB';

   INST_ID OPEN_MODE
---------- ----------
         1 MOUNTED
         2 MOUNTED

As you can see the PDB is mounted on both instances.

Now, let’s create and start a preferred/available service linked to our PDB that way:

srvctl add service -db SRCCONT -pdb BDTPDB -service BDTPDB_TAF -e select -m BASIC -P BASIC -r SRCCONT1 -a SRCCONT2 -w 5 -z 12
srvctl start service -d SRCCONT

Let’s check the PDB status and the service status:

SYS@CDB$ROOT> select inst_id,open_mode from gv$pdbs where name='BDTPDB';

   INST_ID OPEN_MODE
---------- ----------
         2 MOUNTED
         1 READ WRITE
srvctl status service -d SRCCONT
Service BDTPDB_TAF is running on instance(s) SRCCONT1

So, the PDB is open on the Instance that is hosting the service and not open on the other Instance: The Objective is reached.

Now, let’s kill pmon on the SRCCONT1 Instance (which hosts the service), wait for SRCCONT1 to come back, and check the PDB and service status:

kill -9 `ps -ef | grep -i pmon | grep -i SRCCONT1 | awk '{print $2}'`

--PDB Status
SYS@CDB$ROOT> select inst_id,open_mode from gv$pdbs where name='BDTPDB';

   INST_ID OPEN_MODE
---------- ----------
         2 READ WRITE
         1 MOUNTED

-- Service Status
srvctl status service -d SRCCONT
Service BDTPDB_TAF is running on instance(s) SRCCONT2

So, the service failed over, the PDB has been closed on the failing Instance and is open on the one that is now hosting the service: The Objective is reached.

Now, let’s relocate (fail back) the service manually and check the PDB and the service status:

srvctl relocate service -d SRCCONT -s BDTPDB_TAF -oldinst SRCCONT2 -newinst SRCCONT1 -force

--PDB Status
SYS@CDB$ROOT>  select inst_id,open_mode from gv$pdbs where name='BDTPDB';

   INST_ID OPEN_MODE
---------- ----------
         1 READ WRITE
         2 READ WRITE

-- Service Status
srvctl status service -d SRCCONT                                                                         
Service BDTPDB_TAF is running on instance(s) SRCCONT1

As you can see, the service has been relocated but the PDB is now open on both Instances: The Objective is not reached.

To summarize:

  1. The objective has been reached when we created and started the service.
  2. The objective has been reached when the service failed over (Due to Instance crash).
  3. The objective has not been reached when the service has been relocated manually.

FAN callouts to the rescue:

FAN callouts are server-side scripts or executables that run whenever a FAN event is generated. I’ll use this feature to fix the third point, where the objective is not reached.

To do so, I created the following script:

#!/bin/ksh
#
# Close the PDB on the old instance during manual service relocation
#
# Description:
# -  This script closes the PDB on the old instance during manual service relocation
#
# Requirement:
#  - Location of the script must be $ORA_CRS_HOME/racg/usrco/.
#
# Version:
#  - version 1.0
#  - Bertrand Drouvot
#
# Date: 2014/12/04
#
#
# Env settings

ME=`who | cut -d" " -f1`
PASSWDFILE="/etc/passwd"
HOMEDIR=`egrep "^${ME}" ${PASSWDFILE} | cut -d: -f 6`
DATE_LOG=$(date +"%y%m%d_%H%M")
LOGFILE=fan_close_pdb_${DATE_LOG}.log

VAR_EVENTTYPE=$1

for ARGS in $*;
 do
        PROPERTY=`echo $ARGS | cut -f1 -d"=" | tr '[:lower:]' '[:upper:]'`
        VALUE=`echo $ARGS | cut -f2 -d"=" | tr '[:lower:]' '[:upper:]'`
        case $PROPERTY in
                VERSION)      VAR_VERSION=$VALUE ;;
                SERVICE)      VAR_SERVICE=$VALUE ;;
                DATABASE)     VAR_DATABASE=$VALUE ;;
                INSTANCE)     VAR_INSTANCE=$VALUE ;;
                HOST)         VAR_HOST=$VALUE ;;
                STATUS)       VAR_STATUS=$VALUE ;;
                REASON)       VAR_REASON=$VALUE ;;
                CARD)         VAR_CARDINALITY=$VALUE ;;
                TIMESTAMP)    VAR_LOGDATE=$VALUE ;;
                ??:??:??)     VAR_LOGTIME=$VALUE ;;
        esac
 done 

# Close the PDB (PDB name extracted from the service name) if service relocated manually

if [ "$VAR_EVENTTYPE" = "SERVICEMEMBER" ] && [ "$VAR_STATUS" = "DOWN" ] && [ "$VAR_REASON" = "USER" ]  # quote the variables: they may be unset for other event types
then

PATH=$PATH:/usr/local/bin
export PATH
ORAENV_ASK=NO
export ORAENV_ASK
ORACLE_SID=${VAR_INSTANCE}
export ORACLE_SID
. oraenv > /dev/null

# Extract PDB name based on our naming convention (You may need to change this)
PDB=`echo "${VAR_SERVICE}" | sed 's/_TAF.*//'`

# Close the PDB on the old instance
sqlplus /nolog <<EOF
connect / as sysdba;
alter pluggable database ${PDB} close instances=('${VAR_INSTANCE}');
EOF

# Log the event payload in a logfile (assumes ${HOMEDIR}/log exists)
echo ${*} >> ${HOMEDIR}/log/${LOGFILE}

fi

I saved it under $GRID_HOME/racg/usrco/ (on both nodes) and made it executable. The script is then run for every FAN event, but it does its actual work only for a manual service relocation (thanks to the “if” condition in it).

The goal of the script is to close the PDB on the old Instance during manual service relocation.
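
By the way, you can smoke-test the callout by hand before relying on it: invoke it with a payload shaped like the one it parses. The script name and all values below are made up for illustration:

./fan_close_pdb.ksh SERVICEMEMBER VERSION=1.0 service=BDTPDB_TAF database=SRCCONT instance=SRCCONT2 host=node2 status=down reason=USER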

Now, let’s check if the objective is reached. Let’s relocate the service manually from SRCCONT1 to SRCCONT2:

-- Current PDB Status
SYS@CDB$ROOT> select inst_id,open_mode from gv$pdbs where name='BDTPDB';

   INST_ID OPEN_MODE
---------- ----------
         1 READ WRITE
         2 READ WRITE

-- Current service status
SYS@CDB$ROOT> !srvctl status service -d SRCCONT
Service BDTPDB_TAF is running on instance(s) SRCCONT1

-- Relocate the service
SYS@CDB$ROOT> !srvctl relocate service -d SRCCONT -s BDTPDB_TAF -oldinst SRCCONT1 -newinst SRCCONT2 -force

-- Check PDB status
SYS@CDB$ROOT> select inst_id,open_mode from gv$pdbs where name='BDTPDB';

   INST_ID OPEN_MODE
---------- ----------
         2 READ WRITE
         1 MOUNTED

-- Check service status
SYS@CDB$ROOT> !srvctl status service -d SRCCONT
Service BDTPDB_TAF is running on instance(s) SRCCONT2

As you can see, the service has been manually relocated and the PDB has been closed on the old Instance. That way, the PDB is open on only one Instance: The Objective is reached.

Remarks:

  • I used the “force” option during the manual relocation, which disconnects all the sessions during the relocate operation. If you are not using the “force” option, then I guess you don’t want to close the PDB on the old Instance ;-).
  • The PDB open/close behaviour with a preferred/available service linked to it is the one described in this post, unless you have saved its state (the default is to discard the state); a syntax sketch follows.
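
For completeness, saving the state would look like this (a sketch only, not something I used in this test):

SYS@CDB$ROOT> alter pluggable database BDTPDB save state instances=('SRCCONT1');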

Conclusion:

The objective is reached in any case: The PDB is open on only one Instance at any time (with a preferred/available service in place). That way we don’t need to set the hidden parameter “_inmemory_auto_distribute” to false to get the In-Memory data fully populated on one Instance.

Posted in In-Memory, Rac | 1 Comment

In-Memory Instance distribution with RAC and Multitenant Environment

In a previous post (see In-Memory Instance distribution with RAC databases: I want at least one instance that contains all the data) I provided a way to get rid of the default In-Memory Instance distribution on a RAC (non CDB) database.

To summarize:

  • Thanks to the hidden parameter “_inmemory_auto_distribute” we are able to change the way the In-Memory Column Stores are populated by default on an Oracle RAC database.
  • By default all objects populated into memory will be distributed across all of the IM column stores in the cluster.
  • With “_inmemory_auto_distribute” set to false, we are able to populate one instance (or all the instances) with all the data.

Now, let’s have a look at the In-Memory distribution on a Multitenant RAC database and check whether we can override the default behaviour of how the In-Memory data is distributed across the Instances for a given PDB.

By default, the In-Memory Column Store (IMCS) distribution for a PDB is the following:

First case: The PDB is open on one Instance only:

SYS@CDB$ROOT> alter pluggable database BDTPDB open instances=('BDT12c3_1');

Pluggable database altered.

-- Now connect to the BDTPDB PDB as bdt/bdt
SYS@CDB$ROOT> @connect_user_pdb.sql
Enter value for user: bdt
Enter value for pwd: bdt
Enter value for pdb: BDTPDB
Connected.

-- Let's enable the inmemory attribute on the BDT table
BDT@BDTPDB> alter table bdt inmemory ;

Table altered.

-- Trigger the IMCS population
BDT@BDTPDB> select count(*) from bdt;

  COUNT(*)
----------
  44591000

-- check the distribution
BDT@BDTPDB> SELECT s.inst_id, c.name,s.populate_status, s.inmemory_size, s.bytes, s.bytes_not_populated
FROM gv$im_user_segments s, v$pdbs c
WHERE s.segment_name = 'BDT'
and c.con_id=s.con_id; 

   INST_ID NAME                           POPULATE_ INMEMORY_SIZE      BYTES BYTES_NOT_POPULATED
---------- ------------------------------ --------- ------------- ---------- -------------------
         1 BDTPDB                         COMPLETED     695074816 1879048192                   0

As you can see, Instance 1 contains all the data for this PDB (as BYTES_NOT_POPULATED=0).

Second case: The PDB is open on all the Instances:

-- Open the BDTPDB PDB on all instances
SYS@CDB$ROOT> alter pluggable database BDTPDB open instances=all;

Pluggable database altered.

-- Now connect to the BDTPDB PDB as bdt/bdt
SYS@CDB$ROOT> @connect_user_pdb.sql
Enter value for user: bdt
Enter value for pwd: bdt
Enter value for pdb: BDTPDB
Connected.  

-- Let's enable the inmemory attribute on the BDT table
BDT@BDTPDB> alter table bdt inmemory ;

Table altered.

-- Trigger the IMCS population
BDT@BDTPDB> select count(*) from bdt;

  COUNT(*)  
----------  
  44591000

-- check the distribution
BDT@BDTPDB> SELECT s.inst_id, c.name,s.populate_status, s.inmemory_size, s.bytes, s.bytes_not_populated
FROM gv$im_user_segments s, v$pdbs c
WHERE s.segment_name = 'BDT'
and c.con_id=s.con_id;


   INST_ID NAME                           POPULATE_ INMEMORY_SIZE      BYTES BYTES_NOT_POPULATED
---------- ------------------------------ --------- ------------- ---------- -------------------
         1 BDTPDB                         COMPLETED     376307712 1879048192           802889728
         2 BDTPDB                         COMPLETED     298909696 1879048192          1006559232

We can see that, for this PDB, by default no instance contains all the data, as the BYTES_NOT_POPULATED column is greater than 0 (see the percentage query below).
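
If you prefer a percentage view of the population, a quick query like this one (my addition, not part of the original output) does the trick:

BDT@BDTPDB> SELECT s.inst_id,
            round(100 * (s.bytes - s.bytes_not_populated) / s.bytes, 1) pct_populated
            FROM gv$im_user_segments s
            WHERE s.segment_name = 'BDT';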

Now let’s set the hidden parameter “_inmemory_auto_distribute” to false (at the PDB level):

BDT@BDTPDB> alter system set "_inmemory_auto_distribute"=false;
alter system set "_inmemory_auto_distribute"=false
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

So, we can’t set this hidden parameter at the PDB level.

Ok, let’s set it at the CDB level:

SYS@CDB$ROOT> alter system set "_inmemory_auto_distribute"=false;

System altered.
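
As a side note, you can verify the current value of a hidden parameter (connected as SYS) with the usual x$ fixed tables; a sketch:

SYS@CDB$ROOT> SELECT i.ksppinm name, v.ksppstvl value
FROM x$ksppi i, x$ksppcv v
WHERE i.indx = v.indx
AND i.ksppinm = '_inmemory_auto_distribute';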

Then, let's re-trigger the population on instance 1 (for this PDB) and check the distribution:

SYS@CDB$ROOT> @connect_user_pdb.sql
Enter value for user: bdt
Enter value for pwd: bdt
Enter value for pdb: BDTPDB
Connected.  
BDT@BDTPDB> select count(*) from bdt;

  COUNT(*)  
----------  
  44591000

BDT@BDTPDB> SELECT s.inst_id, c.name,s.populate_status, s.inmemory_size, s.bytes, s.bytes_not_populated
FROM gv$im_user_segments s, v$pdbs c
WHERE s.segment_name = 'BDT'
and c.con_id=s.con_id;

   INST_ID NAME                           POPULATE_ INMEMORY_SIZE      BYTES BYTES_NOT_POPULATED
---------- ------------------------------ --------- ------------- ---------- -------------------
         2 BDTPDB                         COMPLETED     297861120 1879048192          1006559232
         1 BDTPDB                         COMPLETED     675151872 1879048192                   0

BINGO! Look at Instance 1: it now contains all the data in its IMCS (as BYTES_NOT_POPULATED=0) for this PDB.

If I connect to Instance 2 (for this PDB) and launch the query to trigger the population, I get:

   INST_ID NAME                           POPULATE_ INMEMORY_SIZE      BYTES BYTES_NOT_POPULATED
---------- ------------------------------ --------- ------------- ---------- -------------------
         1 BDTPDB                         COMPLETED     675151872 1879048192                   0
         2 BDTPDB                         COMPLETED     672006144 1879048192                   0

So, both instances now contain all the data for this PDB.

Remarks:

  • As I said in the previous post: I used a hidden parameter, so you should get Oracle Support approval before using it.
  • I guess (because I can’t test it) that in a Multitenant and Engineered Systems context it is also possible to have all the data appear in the IM column store on every node for a PDB (thanks to the duplicate attribute), as Christian Antognini showed in this blog post for a non-CDB; a syntax sketch follows.
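
For reference, enabling that attribute would look like this (a sketch only; the duplicate attribute is honored on Engineered Systems, and I could not test it):

BDT@BDTPDB> alter table bdt inmemory duplicate all;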

Conclusion:

The conclusion is the same as in my previous post (with a non-CDB):

  • Thanks to the hidden parameter “_inmemory_auto_distribute” we are able to change the way the In-Memory Column Stores are populated by default on an Oracle RAC database.
  • By default all objects populated into memory will be distributed across all of the IM column stores in the cluster.
  • With “_inmemory_auto_distribute” set to false, we are able to populate one instance (or all the instances) with all the data.

And we can add that:

  • If the PDB is open on one instance only then this instance contains all the data.
  • If the PDB is open on all the instances then we have to set “_inmemory_auto_distribute” to false to have at least one instance that contains all the data.
  • Each PDB population will behave the same way (distributed or not) and inherit the CDB setting (as we cannot set the hidden parameter at the PDB level).

 

Posted in In-Memory, Rac | 1 Comment