A closer look at ASM rebalance, Part I: Disks have been added

This article is the first part of the “A closer look at ASM rebalance” series:

  1. Part I: Disks have been added.
  2. Part II: Disks have been dropped.
  3. Part III: Disks have been added and dropped (at the same time).

If you are not familiar with ASM rebalance, I would suggest first reading the two blog posts written by Bane Radulovic on the subject.

In this part I want to visualize the rebalance operation (with 3 power values: 2, 6 and 11) after disks have been added (no dropped disks yet: that will come in Parts II and III).

To do so, on a 2-node extended RAC cluster (11.2.0.4), I added 2 disks into the DATA diskgroup (created with an ASM allocation unit of 4MB) and launched (connected to +ASM1):

  1. alter diskgroup DATA rebalance power 2; (At 11:55 AM).
  2. alter diskgroup DATA rebalance power 6; (At 12:05 PM).
  3. alter diskgroup DATA rebalance power 11; (At 12:15 PM).
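
For reference, the disk addition mentioned above could look something like the following (a minimal sketch: the device paths and failgroup names are illustrative, assuming one new disk per failgroup on this extended cluster):

SQL> alter diskgroup DATA add failgroup HOST31 disk '/dev/san/HOST31_NEWDISK' failgroup HOST32 disk '/dev/san/HOST32_NEWDISK';

Diskgroup altered.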

Then I waited until it finished (meaning v$asm_operation returned no rows for the DATA diskgroup).

Note that 2) and 3) interrupted the rebalance in progress and launched a new one with a new power.
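
A simple way to watch for completion is a query against v$asm_operation along these lines (a sketch; EST_MINUTES only gives a rough idea of the remaining time):

select d.name, o.operation, o.state, o.power, o.sofar, o.est_work, o.est_minutes
from v$asm_operation o, v$asm_diskgroup d
where o.group_number = d.group_number
and d.name = 'DATA';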

During this time I collected the ASM performance metrics (using the asm_metrics utility) for the DATA diskgroup only.

I’ll present the results with Tableau (for each graph I’ll keep the “columns”, “rows” and “marks” shelves in the screenshot so that you can reproduce it).

Note: there is no database activity on the host where the rebalance has been launched.

Here are the results:

First, let’s verify that the whole rebalance activity has been done on the +ASM1 instance (as I launched the rebalance operations from it).

Screen Shot 2014-08-25 at 18.19.34

We can see:

  1. That all read and write rebalance activity has been done on +ASM1.
  2. That the read throughput is very close to the write throughput on +ASM1.
  3. The impact of the power values (2, 6 and 11) on the throughput.
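
As a side note, a quick way to cross-check which instance is actually running the rebalance (independently of the collected metrics) is to query gv$asm_operation, where INST_ID shows the instance that owns the operation (a simple sketch):

select inst_id, group_number, operation, state, power
from gv$asm_operation;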

Now I would like to compare the behavior of two sets of disks: the newly added disks and the disks already part of the DATA diskgroup.

To do so, let’s create in Tableau a SET that contains the 2 new disks.

Screen Shot 2014-08-20 at 21.27.34

Let’s call it “New Disks”:

Screen Shot 2014-08-20 at 21.29.42

Now we are able to display the ASM metrics IN this set (the 2 new disks) and OUT of this set (the existing disks, meaning those already part of the DATA diskgroup).

I will filter the metrics on +ASM1 only (to avoid any noise coming from +ASM2).

Let’s visualize the Reads/s and Writes/s metrics:

Screen Shot 2014-08-25 at 18.26.10

We can see that during the 3 rebalances:

  1. No reads on the new disks (at least until about 12:40 pm).
  2. The number of Writes/s on the new disks increases with the power value.
  3. Reads/s and Writes/s both increase on the existing disks with the power value.
  4. As of 12:40 pm, there is activity on the existing disks while near zero activity on the new ones.
  5. As of 12:40 pm, the number of Writes/s is >= the number of Reads/s on the existing disks (while it was the opposite before).
  • Are 1, 2 and 3 surprising? No.
  • What happened for 4 and 5? I’ll answer later on.

Let’s visualize the Kby Read/s and Kby Write/s metrics:

Screen Shot 2014-08-25 at 18.31.59

We can see that during the 3 rebalances:

  1. No Kby Read/s on the new disks.
  2. The Kby Write/s on the new disks increases with the power value.
  3. Kby Read/s and Kby Write/s both increase on the existing disks with the power value.
  4. As of 12:40 pm, there is activity on the existing disks while no activity on the new ones.
  5. As of 12:40 pm, the same amount of Kby Read/s and Kby Write/s on the existing disks (while it was not the case before).
  • Are 1, 2 and 3 surprising? No.
  • What happened for 4 and 5? I’ll answer later on.

Let’s visualize the Average By/Read and Average By/Write metrics:

Important remark regarding the computation/display of the averages: the By/Read and By/Write measures depend on the number of reads and writes respectively, so the averages have to be calculated as weighted averages.

Let’s create the calculated field in Tableau for the By/Read Weighted Average:

Screen Shot 2014-08-20 at 21.56.49

The same has to be done for the By/Write Weighted Average.
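
In other words, the weighted average is the total number of bytes divided by the total number of I/Os over the period, not the average of the per-sample averages. Just to illustrate the idea outside Tableau (a sketch against the cumulative counters of v$asm_disk_stat, not against the csv used in the graphs):

-- total bytes divided by total I/Os: the same "weighted average" idea
select group_number,
       sum(bytes_read)    / nullif(sum(reads), 0)  avg_bytes_per_read,
       sum(bytes_written) / nullif(sum(writes), 0) avg_bytes_per_write
from v$asm_disk_stat
group by group_number;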

Let’s see the result:

Screen Shot 2014-08-25 at 18.38.07

We can see:

  1. The Avg By/Write on the new disks is about the same (about 1MB) whatever the power value is (before 12:40 pm).
  2. The Avg By/Write tends to increase with the power value on the existing disks.
  3. The Avg By/Read on the existing disks is about the same (about 1MB) whatever the power value is.
  • Is 1 surprising? No.
  • Is 2 surprising? Yes (at least for me).
  • Is 3 surprising? No.

Now that we have seen all those metrics, we can ask:

Q1: So what the hell happened at 12:40 pm?

Let’s check the alert_+ASM1.log file at that time:

Mon Aug 25 12:15:44 2014
ARB0 started with pid=33, OS id=1187132
NOTE: assigning ARB0 to group 4/0x1e089b59 (DATA) with 11 parallel I/Os
Mon Aug 25 12:15:47 2014
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
cellip.ora not found.
Mon Aug 25 12:39:52 2014
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=4
Mon Aug 25 12:40:03 2014
GMON updating for reconfiguration, group 4 at 372 for pid 35, osid 1225810
NOTE: group DATA: updated PST location: disk 0014 (PST copy 0)
NOTE: group DATA: updated PST location: disk 0015 (PST copy 1)
Mon Aug 25 12:40:03 2014
NOTE: group 4 PST updated.
Mon Aug 25 12:40:03 2014
NOTE: membership refresh pending for group 4/0x1e089b59 (DATA)
GMON querying group 4 at 373 for pid 18, osid 67864
SUCCESS: refreshed membership for 4/0x1e089b59 (DATA)
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Mon Aug 25 12:45:24 2014
NOTE: F1X0 copy 2 relocating from 18:44668 to 18:20099 for diskgroup 4 (DATA)
Mon Aug 25 12:53:49 2014
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 4/0x1e089b59 (DATA)

We can see that the ASM rebalance entered the compacting phase around 12:40 pm (see Bane Radulovic’s blog post for more details about the ASM rebalance phases).

Q2: The ASM allocation unit size is 4MB and the Avg By/Read is stuck at 1MB. Why?

I guess this is somehow related to the max_sectors_kb and max_hw_sectors_kb SYSFS parameters. It will be the subject of another post.

Two remarks before concluding:

  1. The ASM rebalance activity is not recorded in the v$asm_disk_iostat view; it is recorded in the v$asm_disk_stat view (see the sketch after this list). So, if you are using the asm_metrics utility, you have to change the asm_feature_version variable to a value > your ASM instance version.
  2. I tested with compatible.asm set to 10.1 and 11.2.0.2 and observed the same behavior for all those metrics.
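
As a quick illustration of remark 1 (a sketch only: both views expose cumulative counters, so you would compare snapshots taken before and after the rebalance to see that the rebalance I/O only shows up in v$asm_disk_stat):

-- rebalance I/O is counted in v$asm_disk_stat but not in v$asm_disk_iostat
select 'v$asm_disk_iostat' stat_view, sum(reads) reads, sum(writes) writes
from v$asm_disk_iostat
where group_number = (select group_number from v$asm_diskgroup where name = 'DATA')
union all
select 'v$asm_disk_stat', sum(reads), sum(writes)
from v$asm_disk_stat
where group_number = (select group_number from v$asm_diskgroup where name = 'DATA');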

Conclusion of Part I:

  • We visualized that the compacting phase of the rebalance operation generates much more activity on the existing disks, compared to near zero activity on the new disks.
  • We visualized that the compacting phase of the rebalance operation generates the same amount of Kby Read/s and Kby Write/s on the existing disks (while it was not the case before).
  • We visualized that during the compacting phase the number of Writes/s is >= the number of Reads/s on the existing disks (while it was the opposite before).
  • We visualized that the Avg By/Read does not exceed 1MB on the existing disks (while the ASM allocation unit has been set to 4MB on my diskgroup).
  • We visualized that the Avg By/Write tends to increase with the power value on the existing disks (quite surprising to me).
  • I’ll update this post with ASM 12c results as soon as I can (if there is something new to report).

Are ASM rebalance and preferred read working together?

Introduction:

If I add disks into a diskgroup, then during the rebalance operation ASM needs to read the data from the disks already part of the diskgroup in order to redistribute it across all the disks (including the new ones).

Question:

If I add 2 disks (one into each failgroup), is the preferred read feature taken into account for the rebalance process? (“for” means “for the reads generated by the rebalance operation”).

Let’s see:

  • Set the preferred read on +ASM1 (so that +ASM1 “prefers” to read from the “HOST31” failgroup):
SQL> alter system set asm_preferred_read_failure_groups='DATA.HOST31';

System altered.
  • Add 2 disks (one into each failgroup) into the DATA diskgroup (connected on +ASM1):
SQL> alter diskgroup DATA add failgroup HOST31 disk '/dev/san/HOST31CA8D0D' failgroup HOST32 disk '/dev/san/HOST32CA8D0D';

Diskgroup altered.
  • Check that the ASM compatibility is high enough (>=11.1) to use the preferred read feature:
SQL> select COMPATIBILITY from v$asm_diskgroup where NAME='DATA';

COMPATIBILITY
------------------------------------------------------------
11.2.0.2.0
  • Launch the rebalance:
SQL> alter diskgroup DATA rebalance power 2;

Diskgroup altered.

Now, let’s collect the ASM metrics (using the asm_metrics utility) and visualize the result with Tableau.

Note: during the rebalance, near zero database activity occurred, so that nearly 100% of the activity comes from the rebalance process.

Result:

Screen Shot 2014-08-23 at 18.08.39

As you can see:

  1. The +ASM1 instance reads from both the HOST31 and the HOST32 failgroups: the preferred read setting is not taken into account (a quick double-check is sketched below).
  2. I changed the power during the rebalance just for fun 😉
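
Regarding point 1, a simple way to double-check it outside Tableau is to compare the cumulative read counters per failgroup (as seen by the ASM instance running the rebalance) before and after the operation (a sketch):

-- reads per failgroup for the DATA diskgroup (cumulative counters)
select failgroup, sum(reads) reads, round(sum(bytes_read)/1024/1024) mb_read
from v$asm_disk_stat
where group_number = (select group_number from v$asm_diskgroup where name = 'DATA')
group by failgroup;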

Remark:

It has been tested on an 11.2.0.4 extended RAC (I still need to test on 12c).

Conclusion:

  • The ASM preferred read feature is not taken into account by the rebalance process.
  • I guess it is still taken into account for the reads coming from the databases during the rebalance.

Simulate and visualize the impact of the ASM preferred read feature on the read IOPS and throughput

Suppose that you decided to put the ASM preferred read feature in place because you observed that the read latency is too high on the farthest disk array (you can see how to reach this conclusion with use case 3 in this post).

So, you want to enable the ASM preferred read feature so that:

  1. The +ASM1 instance “prefers” to read from the “WIN” failgroup.
  2. The +ASM2 instance “prefers” to read from the “JMO” failgroup.

But doing so may have an impact on the read IOPS and throughput distribution per host/disk array because:

  1. The “previous” ASM1 to JMO reads will now be done on the “WIN” array.
  2. The “previous” ASM2 to WIN reads will now be done on the “JMO” array.

Of course, the total number of read operations and the total throughput will not change, but their distribution across the failgroups (disk arrays) may change with the ASM preferred read feature in place.
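
For reference, the change being evaluated here would be implemented with instance-specific settings along these lines (a sketch; I assume the diskgroup is named DATA, adjust to your diskgroup and failgroup names):

SQL> alter system set asm_preferred_read_failure_groups='DATA.WIN' sid='+ASM1';
SQL> alter system set asm_preferred_read_failure_groups='DATA.JMO' sid='+ASM2';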

Question:

  • Is the architecture able to deal with this new read distribution?

To answer this question I will:

  1. Collect the ASM metrics for a certain amount of time (without the ASM preferred read in place) and produce a csv file as described here.
  2. Visualize the ASM metrics with Tableau and simulate the impact of the preferred read feature on the read IOPS and throughput distribution.

Once the csv file is ready (meaning you have collected a representative workload), let’s check what the current workload is (without the ASM preferred read in place).

For the Kby Read/s measure:

We can visualize it with Tableau as follows (I keep the “columns”, “rows” and “marks” shelves in the screenshot so that you can reproduce it).

Screen Shot 2014-08-10 at 18.45.03

For the Reads/s measure:

Screen Shot 2014-08-11 at 11.07.01

We can see the read IOPS and the throughput distribution by failgroup and ASM instance. The read IOPS and the throughput are equally distributed over the failgroups (this is the expected behaviour without the ASM preferred read in place).

Now, what if we implement the ASM preferred read feature? What would be the impact on the read IOPS and the throughput distribution?

To simulate and visualize the impact, let’s create this “New FG for Read operations” calculated field:

Screen Shot 2014-08-11 at 11.10.01

Basically it simulates the ASM preferred read being in place by assigning the preferred failgroup per ASM instance.
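
The exact Tableau formula is in the screenshot above; as an illustration only, the same logic expressed in SQL terms would be a simple CASE on the instance name (asm_metrics_csv, inst and failgroup are hypothetical names standing for the collected csv and its columns):

-- asm_metrics_csv is a hypothetical table holding the collected csv
select inst,
       failgroup,
       case inst
         when '+ASM1' then 'WIN'   -- +ASM1 would prefer to read from WIN
         when '+ASM2' then 'JMO'   -- +ASM2 would prefer to read from JMO
         else failgroup
       end as new_fg_for_read_operations
from asm_metrics_csv;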

Now, let’s simulate and visualize the impact of the ASM preferred read feature (should it be implemented) using the same csv file and this calculated field as a dimension.

For the Kby Read/s measure:

Screen Shot 2014-08-11 at 11.12.56

Note that the throughput distribution would not be the same and that the peaks are higher (> 200 MB/s compared to about 130 MB/s without the ASM preferred read).

For the Reads/s measure:

Screen Shot 2014-08-11 at 11.14.31

Note that the read IOPS distribution would not be the same and that the peak on the WIN failgroup is higher (about 8000 Reads/s compared to about 5000 Reads/s without the ASM preferred read).

Now you can check (with your system and storage administrators) whether your current architecture would be able to deal with this new distribution.

Remarks:

  • ASM is not performing any reads for the databases; it records the metrics for the database instances that it is servicing.

Conclusion:

We have been able to simulate and visualize the impact of the ASM preferred read feature on the read IOPS and the throughput distribution without actually implementing it.