Volume latency for FlexGroup
As mentioned in the KB article, the ONTAP CLI command `qos statistics volume latency show` should be used to monitor volume latency. In Harvest, the same metrics are collected by uncommenting the `workload_volume.yaml` line in your `conf/zapiperf/default.yaml` file.
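As a sketch, the relevant portion of `conf/zapiperf/default.yaml` looks roughly like the following. The exact key names and surrounding entries depend on your Harvest version, so treat the `WorkloadVolume` key and its neighbors as assumptions and check your own file:

```yaml
# conf/zapiperf/default.yaml (excerpt; key names assumed, verify in your install)
objects:
  Volume:          volume.yaml
  # Uncomment the line below to collect per-workload volume latency (qos_latency)
  WorkloadVolume:  workload_volume.yaml
```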
Setup Steps

On the client machine, mount the volume:

```
mount -t nfs 10.193.48.24:/umeng_harvest_vol1 /tmp/foo
```

Create volume traffic:

```
rsync -ah --progress /tmp/foo/ /tmp
```
A large file was read from an NFS mounted directory that was created as a flexgroup with six constituents. While the file was being copied locally, we compared the read latencies reported by ONTAP via:
- ONTAP CLI
- Performance ZAPIs gathered by Harvest for the volume and workload_volume templates
- ONTAP archive files
To make it easier to compare values across the same time intervals, we changed the ZAPI templates to collect performance metrics every minute instead of the out-of-the-box schedule of every three minutes.
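Harvest templates express the collection cadence with a `schedule` section; a minimal sketch of the one-minute change, assuming the standard counter/instance/data schedule keys (the `counter` and `instance` values here are illustrative):

```yaml
# Template schedule override (e.g. in volume.yaml / workload_volume.yaml)
schedule:
  - counter: 20m   # refresh counter metadata (illustrative value)
  - instance: 10m  # rediscover instances (illustrative value)
  - data: 1m       # poll performance data every minute instead of every 3m
```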
The `volume_avg_latency` metric is collected by the `conf/zapiperf/cdot/9.8.0/volume.yaml` template and closely matches what ONTAP reports via the CLI command `statistics volume show -interval 60 -iterations 10` as shown below.

statistics volume show -interval 60 -iterations 10
Below are the first five samples from the CLI. We're interested in the latency, which is the last column. As you can see, the first sample has a latency of 70 microseconds and corresponds to A in the graph above collected by Harvest.
```
umeng-aff300-05-06::> statistics volume show -interval 60 -iterations 5 -volume umeng_harvest_vol1__0001

umeng-aff300-05-06 : 3/28/2023 09:32:48

                                             *Total Read Write Other  Read   Write Latency
                  Volume              Vserver    Ops  Ops   Ops   Ops (Bps)   (Bps)    (us)
------------------------ -------------------- ------ ---- ----- ----- ----- ------- -------
umeng_harvest_vol1__0001 umeng-aff300-05-svm1     47    2    35     3 31528  262837     138

umeng-aff300-05-06 : 3/28/2023 09:33:48

umeng_harvest_vol1__0001 umeng-aff300-05-svm1    102    2    94     3 48883 1261668     160

umeng-aff300-05-06 : 3/28/2023 09:34:49

                                             *Total Read Write Other  Read   Write Latency
                  Volume              Vserver    Ops  Ops   Ops   Ops (Bps)   (Bps)    (us)
------------------------ -------------------- ------ ---- ----- ----- ----- ------- -------
umeng_harvest_vol1__0001 umeng-aff300-05-svm1     72    2    65     3 50629  864709     163

umeng-aff300-05-06 : 3/28/2023 09:35:48

umeng_harvest_vol1__0001 umeng-aff300-05-svm1     59    2    53     3 43736  542626     160

umeng-aff300-05-06 : 3/28/2023 09:36:45

                                             *Total Read Write Other     Read    Write Latency
                  Volume              Vserver    Ops  Ops   Ops   Ops    (Bps)    (Bps)    (us)
------------------------ -------------------- ------ ---- ----- ----- -------- -------- -------
umeng_harvest_vol1__0001 umeng-aff300-05-svm1    538  193   329     4 12487383 13700651     169
```
The `qos_latency` metric is collected by the `conf/zapiperf/cdot/9.8.0/workload_volume.yaml` template and closely matches what ONTAP reports via the CLI command `qos statistics volume latency show -volume umeng_harvest_vol1 -iterations 10 -vserver umeng-aff300-05-svm1` as shown below.

qos statistics volume latency show -volume umeng_harvest_vol1 -iterations 10 -vserver umeng-aff300-05-svm1
NOTES:
- Prometheus is in a different timezone than the cluster, hence the time difference between the image and the CLI
- the `qos statistics` command won't print anything for workloads without activity
- this command updates every second
- rows at the top happen before rows at the bottom (time flows down)
- these values match closely with the Perf ZAPI values shown in Prometheus above
Below are the first ten seconds of output from the CLI. We're interested in the latency, which is the third column. As you can see, the first sample has a latency of 124 microseconds and corresponds to A in the graph above collected by Harvest.
```
umeng-aff300-05-06::> qos statistics volume latency show -volume umeng_harvest_vol1 -iterations 10 -vserver umeng-aff300-05-svm1
Workload            ID    Latency    Network    Cluster       Data       Disk    QoS Max    QoS Min      NVRAM      Cloud  FlexCache    SM Sync         VA
--------------- ------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
-total-              -   373.00us    96.00us   100.00us   114.00us        0ms        0ms        0ms    63.00us        0ms        0ms        0ms        0ms
umeng_harvest..  38303   344.00us   144.00us        0ms   130.00us        0ms        0ms        0ms    70.00us        0ms        0ms        0ms        0ms
-total-              -   455.00us    80.00us   124.00us   166.00us        0ms        0ms        0ms    85.00us        0ms        0ms        0ms        0ms
umeng_harvest..  38303   360.00us    63.00us        0ms   211.00us        0ms        0ms        0ms    86.00us        0ms        0ms        0ms        0ms
-total-              -   349.00us    71.00us   104.00us   110.00us        0ms        0ms        0ms    64.00us        0ms        0ms        0ms        0ms
umeng_harvest..  38303   235.00us    61.00us        0ms   109.00us        0ms        0ms        0ms    65.00us        0ms        0ms        0ms        0ms
-total-              -   377.00us    78.00us   120.00us   115.00us     1.00us        0ms        0ms    63.00us        0ms        0ms        0ms        0ms
umeng_harvest..  38303   263.00us    81.00us        0ms   125.00us        0ms        0ms        0ms    57.00us        0ms        0ms        0ms        0ms
-total-              -   984.00us    83.00us   142.00us   669.00us        0ms        0ms        0ms    90.00us        0ms        0ms        0ms        0ms
umeng_harvest..  38303   256.00us    85.00us        0ms   118.00us        0ms        0ms        0ms    53.00us        0ms        0ms        0ms        0ms
-total-              -   367.00us    78.00us   118.00us   109.00us        0ms        0ms        0ms    62.00us        0ms        0ms        0ms        0ms
umeng_harvest..  38303   281.00us    76.00us        0ms   132.00us        0ms        0ms        0ms    73.00us        0ms        0ms        0ms        0ms
-total-              -   383.00us    76.00us   126.00us   119.00us        0ms        0ms        0ms    62.00us        0ms        0ms        0ms        0ms
umeng_harvest..  38303   317.00us    66.00us        0ms   167.00us    43.00us        0ms        0ms    41.00us        0ms        0ms        0ms        0ms
-total-              -   396.00us    92.00us   114.00us   122.00us        0ms        0ms        0ms    68.00us        0ms        0ms        0ms        0ms
umeng_harvest..  38303   262.00us    59.00us        0ms   151.00us        0ms        0ms        0ms    52.00us        0ms        0ms        0ms        0ms
-total-              -   533.00us    70.00us   110.00us   277.00us        0ms        0ms        0ms    76.00us        0ms        0ms        0ms        0ms
umeng_harvest..  38303   449.00us    58.00us        0ms   315.00us        0ms        0ms        0ms    76.00us        0ms        0ms        0ms        0ms
-total-              -   306.00us    71.00us    53.00us   116.00us        0ms        0ms        0ms    66.00us        0ms        0ms        0ms        0ms
umeng_harvest..  38303   252.00us    67.00us        0ms   117.00us        0ms        0ms        0ms    68.00us        0ms        0ms        0ms        0ms
```
Below is a data sample collected for constituent 1 from an archive file for the same period of time shown above.
avg_latency = delta(avg_latency_raw) / delta(iops_raw)
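The formula above can be sketched in a few lines of Python. The field names `avg_latency_raw` and `iops_raw` are placeholders for the two raw counters, and the counter values in the example are made up for illustration:

```python
def avg_latency_us(prev: dict, curr: dict) -> float:
    """Average latency (microseconds) over the interval between two raw samples.

    Each sample holds monotonically increasing raw counters:
      avg_latency_raw -- cumulative latency in microseconds
      iops_raw        -- cumulative operation count
    """
    d_lat = curr["avg_latency_raw"] - prev["avg_latency_raw"]
    d_ops = curr["iops_raw"] - prev["iops_raw"]
    if d_ops == 0:
        # No operations in the interval: latency is undefined, report 0
        return 0.0
    return d_lat / d_ops

# Illustrative counter values: 70,000 us of latency accrued over 1,000 ops
prev = {"avg_latency_raw": 1_000_000, "iops_raw": 5_000}
curr = {"avg_latency_raw": 1_070_000, "iops_raw": 6_000}
print(avg_latency_us(prev, curr))  # 70.0
```

This is the standard way ONTAP delta counters are turned into per-interval averages: both counters only ever grow, so the ratio of their deltas gives the average over exactly that interval.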