DAOS-16328 control: Update dmg pool list for MD-on-SSD #15490
Conversation
Ticket title is 'Update dmg pool list for MD-on-SSD P2'
Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15490/1/testReport/
Test stage Functional Hardware Medium MD on SSD completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15490/1/execution/node/1483/log
Branch force-pushed from 9fdaa23 to 76e993c (Compare)
Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15490/3/testReport/
Branch force-pushed from 76e993c to 4dc7047 (Compare)
Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15490/4/testReport/
Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15490/4/testReport/
Branch force-pushed from 4dc7047 to 36ef9ba (Compare)
Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15490/5/testReport/
Features: control
Skip-func-hw-test-medium-md-on-ssd: false
Skip-func-hw-test-large-md-on-ssd: false
Required-githooks: true
Signed-off-by: Tom Nabarro <tom.nabarro@intel.com>
Branch force-pushed from 36ef9ba to c6ebd52 (Compare)
Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15490/6/testReport/
LGTM, just one minor remark which could be fixed in a follow-up PR if needed.
Jenkins CI failing on unrelated NLT valgrind errors, otherwise passing.
Minor suggestions, nothing blocking.
@@ -533,7 +533,7 @@ func poolQueryInt(ctx context.Context, rpcClient UnaryInvoker, req *PoolQueryReq
	return resp, err
}

-// UpdateState update the pool state.
+// UpdateState update the pool state based on response field values.
spelling
Suggested change:
-// UpdateState update the pool state based on response field values.
+// UpdateState updates the pool state based on response field values.
"update" I think is valid as it describes the functionality from the perspective of the function itself e.g. as a function I "update blah blah blah"
by some storage targets while space is still available on others. Again
for the NVMe or DATA tier.
Wording nit.
Suggested change:
-by some storage targets while space is still available on others. Again
-for the NVMe or DATA tier.
+by some storage targets while space is still available on others. Applies only
+for the NVMe or DATA tier.
Will update in a follow-on.
- the total pool size
- the percentage of used space (i.e., 100 * used space / total space)
- the imbalance percentage indicating whether data distribution across
- The total pool size (NVMe or DATA tier, not including Metadata tier).
In the SCM-only case (admittedly a corner case at this point), nothing has changed, has it? May be worth calling out here.
Can update in a follow-on. The behaviour for PMem has changed because only the NVMe tier is displayed in the default mode; as you have said, nothing changes for SCM-only (and there is no equivalent supported mode for MD-on-SSD).
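As a side note, here is a minimal sketch of the used-space percentage described in the docs hunk quoted above (100 * used space / total space), computed for a single tier; the helper name and the example values are hypothetical, not taken from the dmg source:

```go
package main

import "fmt"

// usedPct returns the percentage of used space for a tier, i.e.
// 100 * used / total, guarding against a zero-sized tier.
// Hypothetical helper for illustration only.
func usedPct(total, free uint64) float64 {
	if total == 0 {
		return 0
	}
	used := total - free
	return 100 * float64(used) / float64(total)
}

func main() {
	// Example: a 1 TiB DATA tier with 256 GiB free -> 75.0% used.
	total := uint64(1) << 40
	free := uint64(256) << 30
	fmt.Printf("DATA tier used: %.1f%%\n", usedPct(total, free))
}
```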
Update dmg pool list -v tabular output for MD-on-SSD. Change column
titles to describe META and DATA tiers (rather than SCM and NVME as in
PMem mode).
Correct confusing behaviour by making the Used and Imbalance table values
in non-verbose output always refer to the DATA tier rather than varying
based on value comparison. With the previous implementation, the
most-used and most-imbalanced tier values were shown, so different tier
values could appear in the same row (DAOS-12900).
Hide usage columns in table output if --no-query is set (DAOS-16701).
Hide upgrade and disabled columns if --no-query -v is set (DAOS-16296).
(A rough sketch of this column-hiding logic follows below.)
Features: control
Skip-func-hw-test-medium-md-on-ssd: false
Skip-func-hw-test-large-md-on-ssd: false
Required-githooks: true
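A rough illustration of the column-hiding behaviour described in the PR text above; the function and column names are assumptions for the sketch, not the actual dmg code:

```go
package main

import "fmt"

// poolListTitles returns hypothetical table column titles for dmg pool
// list, dropping query-derived columns when --no-query is set, per the
// behaviour described above. Column names are assumed for illustration.
func poolListTitles(noQuery, verbose bool) []string {
	titles := []string{"Pool", "Size"}
	if noQuery {
		// Without a pool query there is no usage, upgrade or
		// disabled-targets data to display.
		return titles
	}
	titles = append(titles, "Used", "Imbalance")
	if verbose {
		titles = append(titles, "UpgradeNeeded?", "Disabled")
	}
	return titles
}

func main() {
	fmt.Println(poolListTitles(false, true)) // full verbose listing
	fmt.Println(poolListTitles(true, true))  // --no-query -v hides extras
}
```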
Before requesting gatekeeper:
- Features: (or Test-tag*) commit pragma was used or there is a reason documented that there are no appropriate tags for this PR.
Gatekeeper: