
DAOS-16469 dtx: properly handle DTX partial commit #15335

Open
wants to merge 1 commit into master
Conversation

Nasf-Fan
Contributor

@Nasf-Fan Nasf-Fan commented Oct 17, 2024

When a DTX leader globally commits a DTX, it is possible that some DTX participant(s) cannot commit the DTX entry because of various issues, such as network or space trouble. In such a case, the DTX leader needs to keep the active DTX entry persistently for further commit/resync. But that does not mean the related modification attached to the DTX entry on the leader target cannot be committed; instead, we can commit the related modification while keeping only the DTX header. That is enough for the DTX leader to do further DTX commit/resync to handle the formerly failed DTX participant(s).

The benefit is that VOS aggregation on the leader target will not be affected by remote DTX commit failure.

Allow-unstable-test: true
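The leader-side behavior described above can be sketched in a few lines. This is an illustrative model only: `dtx_entry_state` and `leader_handle_commit` are hypothetical names invented for this sketch, not the DAOS DTX API. The point is that the leader's local modification is committed unconditionally, while the active DTX header is retained only while some participant still needs commit/resync.

```c
#include <stdbool.h>

/* Hypothetical model of the leader-side decision; not the real DAOS
 * data structure. */
struct dtx_entry_state {
	bool committed_locally; /* local modification committed on leader */
	bool header_retained;   /* active DTX header kept for resync */
};

/* n_failed of the remote participants could not commit the DTX. */
static void
leader_handle_commit(struct dtx_entry_state *dte, int n_failed)
{
	/* The local modification is always committed on the leader, so
	 * VOS aggregation is not blocked by remote commit failures. */
	dte->committed_locally = true;

	/* Keep only the DTX header while some participant still needs
	 * commit/resync; drop it once everyone has committed. */
	dte->header_retained = (n_failed > 0);
}
```

Once the remaining participants are committed via resync, a later pass with `n_failed == 0` clears the retained header.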

Before requesting gatekeeper:

  • Two review approvals and any prior change requests have been resolved.
  • Testing is complete and all tests passed or there is a reason documented in the PR why it should be force landed and forced-landing tag is set.
  • Features: (or Test-tag*) commit pragma was used or there is a reason documented that there are no appropriate tags for this PR.
  • Commit messages follow the guidelines outlined here.
  • Any tests skipped by the ticket being addressed have been run and passed in the PR.

Gatekeeper:

  • You are the appropriate gatekeeper to be landing the patch.
  • The PR has 2 reviews by people familiar with the code, including appropriate owners.
  • Githooks were used. If not, request that the user install them and check copyright dates.
  • Checkpatch issues are resolved. Pay particular attention to ones that will show up on future PRs.
  • All builds have passed. Check non-required builds for any new compiler warnings.
  • Sufficient testing is done. Check feature pragmas and test tags and that tests skipped for the ticket are run and now pass with the changes.
  • If applicable, the PR has addressed any potential version compatibility issues.
  • Check the target branch. If it is the master branch, should the PR go to a feature branch? If it is a release branch, does it have merge approval in the JIRA ticket?
  • Extra checks if forced landing is requested
    • Review comments are sufficiently resolved, particularly by prior reviewers that requested changes.
    • No new NLT or valgrind warnings. Check the classic view.
    • Quick-build or Quick-functional is not used.
  • Fix the commit message upon landing. Check the standard here. Edit it to create a single commit. If necessary, ask submitter for a new summary.


github-actions bot commented Oct 17, 2024

Ticket title is 'IOR Easy performance low with EC_16P2GX'
Status is 'In Progress'
Labels: 'daos_ecb_scale,pre_acceptance_issues'
Job should run at elevated priority (1)
https://daosio.atlassian.net/browse/DAOS-16469

@Nasf-Fan Nasf-Fan force-pushed the Nasf-Fan/DAOS-16469_6 branch 5 times, most recently from 77246bd to 38f3d76 Compare October 23, 2024 01:51
@daosbuild1
Collaborator

Test stage Functional Hardware Medium Verbs Provider completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15335/7/execution/node/1565/log

@github-actions github-actions bot added the priority Ticket has high priority (automatically managed) label Oct 25, 2024
@Nasf-Fan Nasf-Fan force-pushed the Nasf-Fan/DAOS-16469_6 branch 2 times, most recently from d0a8ee1 to 742ad75 Compare October 28, 2024 07:15
@Nasf-Fan Nasf-Fan marked this pull request as ready for review October 29, 2024 01:28
@Nasf-Fan Nasf-Fan requested review from a team as code owners October 29, 2024 01:28
liuxuezhao previously approved these changes Nov 5, 2024
rc = dtx_refresh(dth, ioc->ioc_coc);
if (rc == -DER_AGAIN)
        goto again;
if (!obj_rpc_is_fetch(rpc) || retry < 30) {
Contributor

If retry exceeds 30 times for a fetch, -DER_INPROGRESS will be returned to the client?
Then the client will retry again, right? Do you think that is better than always calling refresh here?

Contributor Author

Yes, if dtx_refresh fails that frequently, we stop the server-side retry and return -DER_INPROGRESS to the client. Not sure whether it is better, but it will at least reduce the possibility of too many ULTs making the system busy.
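The retry bound being discussed can be sketched as follows. This is a minimal illustration, not the real server code: `fake_dtx_refresh`, `handle_fetch`, and the error-code values are all invented for the sketch. It shows the shape of the trade-off: the server retries refresh up to 30 times for a fetch, then hands -DER_INPROGRESS back so the client retries instead of burning more server ULTs.

```c
/* Illustrative error codes; the real DAOS values differ. */
#define DER_AGAIN      11
#define DER_INPROGRESS 12

/* Hypothetical stand-in for dtx_refresh(): succeeds once the attempt
 * number reaches succeed_at, otherwise asks the caller to retry. */
static int
fake_dtx_refresh(int attempt, int succeed_at)
{
	return attempt >= succeed_at ? 0 : -DER_AGAIN;
}

/* Bounded server-side retry for a fetch RPC: give up after 30 refresh
 * attempts and return -DER_INPROGRESS so the client retries on its own. */
static int
handle_fetch(int succeed_at)
{
	int retry = 0;
	int rc;

	for (;;) {
		rc = fake_dtx_refresh(retry, succeed_at);
		if (rc != -DER_AGAIN)
			return rc;
		if (++retry >= 30)
			return -DER_INPROGRESS; /* let the client retry */
	}
}
```

With a cap like this, a persistently unresolvable DTX costs at most 30 refresh attempts per fetch on the server; the client-visible latency grows, but server ULTs are freed sooner.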

@daosbuild1
Collaborator

Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15335/12/testReport/

@daosbuild1
Collaborator

Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15335/13/testReport/

@daosbuild1
Collaborator

Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15335/14/testReport/

@daosbuild1
Collaborator

Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15335/15/testReport/

@daosbuild1
Collaborator

Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15335/16/testReport/

@daosbuild1
Collaborator

Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15335/17/testReport/

@daosbuild1
Collaborator

Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15335/18/testReport/

@daosbuild1
Collaborator

Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15335/19/testReport/

@daosbuild1
Collaborator

Test stage Functional Hardware Medium Verbs Provider completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15335/19/execution/node/1482/log

@daosbuild1
Collaborator

Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15335/20/testReport/

@daosbuild1
Collaborator

Test stage Functional Hardware Medium completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15335/20/execution/node/1462/log

Signed-off-by: Fan Yong <fan.yong@intel.com>
Labels
priority Ticket has high priority (automatically managed)
Development

3 participants