
Add StalledDiskPrimary analysis and recovery by vtorc #16050

Closed
wants to merge 9 commits

Conversation

@joekelley joekelley commented Jun 4, 2024

Description

At HubSpot we have had a handful of incidents where a primary becomes impaired due to disk issues. When this happens, we observe that vtorc assigns an UnreachablePrimary analysis and does nothing because the FullStatus
call it makes to the tablet times out. We monitor for these cases outside of Vitess and resolve them by running ERS, but it would be ideal if vtorc could detect and address these cases itself.

This change adds support for a StalledDiskPrimary analysis and recovery by vtorc. To detect when a tablet has a stalled disk we add a FileSystemManager that attempts to write a file to vt data root every five seconds and expose a method that the tablet manager invokes in FullStatus to report whether the disk is stalled. Vtorc is modified to check for the stalled disk error from FullStatus and record the result in the Cleanup block.

We are in the process of testing a change to this effect in lower environments at HubSpot. We aren't running the latest version of Vitess, so our internal patch is a bit different from what is presented here, and this exact implementation hasn't been tested. This is my first Vitess PR. Any and all feedback is greatly appreciated 🙂

Related Issue(s)

Slack discussion from this time last year: https://vitess.slack.com/archives/C02GSRZ8XAN/p1685456224040299
PR that came from that discussion but didn't get merged: #13207

Note that this implementation is heavily inspired by the comments on #13207.

Checklist

  • "Backport to:" labels have been added if this change should be back-ported to release branches
  • If this change is to be back-ported to previous releases, a justification is included in the PR description
  • Tests were added or are not required
  • Did the new or modified tests pass consistently locally and on CI?
  • Documentation was added or is not required

Deployment Notes


vitess-bot bot commented Jun 4, 2024

Review Checklist

Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.

General

  • Ensure that the Pull Request has a descriptive title.
  • Ensure there is a link to an issue (except for internal cleanup and flaky test fixes); new features should have an RFC that documents use cases and test cases.

Tests

  • Bug fixes should have at least one unit or end-to-end test; enhancements and new features should have a sufficient number of tests.

Documentation

  • Apply the release notes (needs details) label if users need to know about this change.
  • New features should be documented.
  • There should be some code comments as to why things are implemented the way they are.
  • There should be a comment at the top of each new or modified test to explain what the test does.

New flags

  • Is this flag really necessary?
  • Flag names must be clear and intuitive, use dashes (-), and have a clear help text.

If a workflow is added or modified:

  • Each item in Jobs should be named in order to mark it as required.
  • If the workflow needs to be marked as required, the maintainer team must be notified.

Backward compatibility

  • Protobuf changes should be wire-compatible.
  • Changes to _vt tables and RPCs need to be backward compatible.
  • RPC changes should be compatible with vitess-operator
  • If a flag is removed, then it should also be removed from vitess-operator and arewefastyet, if used there.
  • vtctl command output order should be stable and awk-able.

@vitess-bot vitess-bot bot added NeedsBackportReason If backport labels have been applied to a PR, a justification is required NeedsDescriptionUpdate The description is not clear or comprehensive enough, and needs work NeedsIssue A linked issue is missing for this Pull Request NeedsWebsiteDocsUpdate What it says labels Jun 4, 2024
@github-actions github-actions bot added this to the v21.0.0 milestone Jun 4, 2024

codecov bot commented Jun 4, 2024

Codecov Report

Attention: Patch coverage is 72.63158% with 26 lines in your changes missing coverage. Please review.

Project coverage is 68.66%. Comparing base (f4591fb) to head (abd5444).
Report is 330 commits behind head on main.

Files with missing lines Patch % Lines
go/vt/vttablet/tabletmanager/filesystemmanager.go 78.78% 14 Missing ⚠️
go/vt/vtorc/inst/instance_dao.go 54.54% 5 Missing ⚠️
go/vt/vttablet/tabletmanager/tm_init.go 25.00% 3 Missing ⚠️
go/vt/vttablet/tabletmanager/rpc_replication.go 0.00% 2 Missing ⚠️
go/vt/vtorc/config/config.go 66.66% 1 Missing ⚠️
go/vt/vtorc/logic/topology_recovery.go 0.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main   #16050      +/-   ##
==========================================
+ Coverage   68.23%   68.66%   +0.43%     
==========================================
  Files        1541     1549       +8     
  Lines      197254   199172    +1918     
==========================================
+ Hits       134597   136763    +2166     
+ Misses      62657    62409     -248     


@GuptaManan100 GuptaManan100 added Type: Enhancement Logical improvement (somewhere between a bug and feature) Component: VTorc Vitess Orchestrator integration and removed NeedsDescriptionUpdate The description is not clear or comprehensive enough, and needs work NeedsWebsiteDocsUpdate What it says NeedsBackportReason If backport labels have been applied to a PR, a justification is required labels Jun 19, 2024
Comment on lines 59 to 63
// Return error if the disk is stalled or rejecting writes.
// Noop by default, must be enabled with the flag "enable_stalled_disk_check".
if tm.fsManager.IsDiskStalled() {
return nil, errors.New("stalled disk")
}
Member

I think it's better to have a boolean field or an error field here. Even if the disk is stalled, we do get all the other field information back in FullStatus that can be used. Also, an error in full status indicates to vtorc that it couldn't reach the vttablet.

Author

I developed this change by using fsfreeze -f /vt on the primary instance in a test keyspace. In my testing before making any code changes, I found that vtorc's invocation of FullStatus would timeout and return a context deadline exceeded error. Simple queries like select @@global.server_id; would hang until the filesystem was unfrozen.

We could add a boolean or error field to model this, but I think we'd want to return here either way if the check fails, and it seems a bit cleaner to me to return an error rather than a response message with mostly nil/zero values.

Member

I don't think it's a good architectural design to rely on the error message containing "stalled disk". It is better to have that as a boolean field in the FullStatus output. The way the code is written now, the error message will always have to contain "stalled disk" for backward compatibility. We can leave all other fields empty if this new boolean is set, but in my opinion we shouldn't rely on the contents of the error message.

Contributor

I agree with @GuptaManan100 here, a boolean in FullStatus would also allow users of vtctldclient to see the status using GetFullStatus

Author

Makes sense, I'll make this change 👍
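For illustration, the boolean-field approach the reviewers suggest might look roughly like this. The FullStatus shape and the DiskStalled field name below are hypothetical sketches, not the actual proto definition:

```go
package main

import "fmt"

// Hypothetical shape of the suggestion: carry disk health as a boolean
// field on the FullStatus response rather than inside an error string.
type FullStatus struct {
	ServerID    uint32
	DiskStalled bool // true when the periodic write check has timed out
}

// analyze mimics the vtorc side: inspect the field directly instead of
// string-matching on an error message, so the RPC can still succeed and
// return the other status fields.
func analyze(s *FullStatus) string {
	if s.DiskStalled {
		return "StalledDiskPrimary"
	}
	return "NoProblem"
}

func main() {
	fmt.Println(analyze(&FullStatus{ServerID: 1, DiskStalled: true})) // StalledDiskPrimary
	fmt.Println(analyze(&FullStatus{ServerID: 2}))                    // NoProblem
}
```

The design benefit is that no backward-compatibility promise about error-message text is created, and tools like vtctldclient GetFullStatus can surface the flag directly.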

Comment on lines 203 to 207
if err != nil {
if config.Config.EnableStalledDiskPrimaryAnalysis && strings.Contains(err.Error(), "stalled disk") {
stalledDisk = true
}
goto Cleanup
Member

If we don't send the stalled disk as an error back as suggested ☝️, we can set the value in the normal flow below and it would be written as part of mkInsertOdkuInstances, and we won't need to change UpdateInstanceLastChecked.

type writeFunction func() error

func attemptFileWrite() error {
file, err := os.Create(path.Join(env.VtDataRoot(), ".stalled_disk_check"))
Contributor
@timvaillancourt commented Jun 21, 2024

Are there any cases where VTDATAROOT isn't the same disk as the MySQL datadir? For our deployment it IS the same disk - our datadir is one level below VTDATAROOT. But theoretically the datadir could be on its own disk 🤔

It might be more flexible to have a flag like --stalled_disk_check_dir / --stalled_disk_check_root enable this feature rather than a boolean "enable" flag

@timvaillancourt
Contributor

@joekelley this is awesome, thanks a lot for moving this forward!

@joekelley
Author

Thanks for the suggestions here! I pushed a few changes to improve the readability of the file system manager and fixed a bug where it would attempt to run concurrent file writes in the event that a write exceeded the timeout. Now it will skip attempting to write a file if there is already a slow write in progress.

Additionally, I increased the default write timeout to 30s. We found in our testing that 2s was far too low and led to many false positives and unnecessary failovers. By increasing it to 30s we hope to capture only disk stalls that would lead to prolonged downtime rather than transient blips in the storage layer.

@deepthi
Member

deepthi commented Jul 11, 2024

@joekelley I'll get this re-reviewed, but in the meantime, can you fix the failing DCO check? We won't be able to merge without that. Once that is fixed, we can re-run CI as well.

Joe Kelley added 6 commits July 11, 2024 13:45
Signed-off-by: Joe Kelley <jkelley@hubspot.com>
Signed-off-by: Joe Kelley <jkelley@hubspot.com>
Signed-off-by: Joe Kelley <jkelley@hubspot.com>
…cess failovers

Signed-off-by: Joe Kelley <jkelley@hubspot.com>
Signed-off-by: Joe Kelley <jkelley@hubspot.com>
Signed-off-by: Joe Kelley <jkelley@hubspot.com>
@deepthi
Member

deepthi commented Jul 11, 2024

Looks like there isn't a linked issue. That is something we require for anything other than trivial doc changes or code cleanup type of PRs. It will be good if you can create one, and also look at the test failures in order to get those out of the way before reviewers come back.

Joe Kelley added 2 commits July 12, 2024 11:21
Signed-off-by: Joe Kelley <jkelley@hubspot.com>
Signed-off-by: Joe Kelley <jkelley@hubspot.com>
@timvaillancourt
Contributor

timvaillancourt commented Jul 12, 2024

Following this change, something I'd like to explore is MySQL itself providing the signal that the disk is stalled. I wasn't able to find this easily for #13207 and opted for a similar approach to the one here, but I imagine there is a way now, or will be in the future, via some sort of performance_schema query or a metric that doesn't exist yet (blocked/stalled writes?) - for now MySQL just waits forever

Signed-off-by: Joe Kelley <jkelley@hubspot.com>
@joekelley
Author

Looks like the most recent build was successful except for the PR Labels check. I added a linked issue but don't have permission to remove the NeedsIssue label, could someone take care of that for me?

@timvaillancourt timvaillancourt removed the NeedsIssue A linked issue is missing for this Pull Request label Jul 16, 2024
return fs
}

func (fs *pollingFileSystemManager) poll(ctx context.Context) {
Contributor

This function seems to work well for preventing the checker from stalling itself when the disk is frozen, but I was wondering if there are other opportunities for parts of Vitess to be stalled and still cause an issue, depending on how the environment is set up.

For example, we don't log directly to disk but rely on rsyslog to capture STDERR and route the logs appropriately. If the disk is stalled, rsyslog will be stalled, not the vttablet. But if you log directly to disk via --log_dir and don't have a separate thread handling the logging mechanism, then whenever you log a message the particular thread will try to perform an I/O operation and be blocked (we saw this in a different system, not Vitess, because we were logging directly to disk).

So I would be curious if you could test with --log_dir (or maybe you already are!) and confirm it still allows vtorc to properly detect and fail over the faulty host.

Author

Interesting. I don't see why this check wouldn't catch the case where the --log_dir directory cannot be written to, as long as --stalled-disk-write-dir is set to the same directory. Is your suggestion to run the stalled disk check against --log_dir in addition to whatever data directory the user provides with --stalled-disk-write-dir? Or even to allow providing multiple directories as arguments and poll each separately?

Contributor

Yeah, I was wondering if other threads that write to disk directly could cause other parts of Vitess to be I/O blocked, especially if we are logging something. Setting both --log_dir and --stalled-disk-write-dir to directories in the same filesystem and freezing it would be a good additional test, in case you are only logging to STDERR right now. If you are already writing to the same filesystem and this is working, great!

Contributor
@timvaillancourt commented Jul 17, 2024

I think the main goal for VTOrc detection/remediation is to prevent availability/durability problems, so if the log dir being unwritable doesn't impact query serving, I'm on the fence about whether that alone is a reason to fail over. An unwritable data dir, though, is always a good reason if we are sure.

I'm curious what @GuptaManan100 thinks about this detail?

Member

I agree with @timvaillancourt. Not being able to log probably shouldn't trigger an automatic failover. If that is a use case we want to address, I think it would be better to add a metric around not being able to log, and let users see the metric on the vttablet page; if they deem it worthwhile, they can run ERS manually.

Contributor

I agree with that. My question is specifically whether, by enabling --log_dir, other threads (including ones that can impact availability) could be blocked because they are trying to log something and the flag makes the log go to disk instead of (or in addition to) STDERR. If the disk is stalled, they might be blocked as well. I am not worried if we stop logging because the disk is stalled, as long as VTOrc still does the right thing.

Author

I see. If some tablet thread(s) can't flush logs to disk then some critical work could be blocked.

I had forgotten this, but early in my iterations on the feature I believe I encountered this phenomenon. I had many debug logs around my changes in filesystemmanager.go, and I found that the filesystem manager was not reaching the line that sets stalled = true because it was blocked on a log line, presumably because the logs were unable to flush to disk. I have not seen this issue in my testing since removing the log lines from the critical section, but it may be possible that a log line elsewhere in the tablet code could break in this way.

IMO if we can establish confidence that the StalledDiskPrimary feature is not broken by a blocking log statement, we should proceed with this change as-is. Separately, it could be worthwhile to audit which (if any) critical tablet functionality breaks when logs can't be flushed to disk.

Contributor

Yeah, I don't mean to block this at all; I think it is a good change to have (we had to build our own automation to handle this particular case). I just wanted to mention that it might still stall and prevent some failovers under some conditions, and it would be good to audit those (not necessarily as part of this PR).

@joekelley
Author

We've noticed in our testing that when we run fsfreeze on the primary's vtdataroot to trigger the StalledDiskPrimary recovery, we are left with serving co-primaries as in #14637. We have automation that quickly kills the old primary tablet but I wanted to call attention to that here.

Contributor

This PR is being marked as stale because it has been open for 30 days with no activity. To rectify, you may do any of the following:

  • Push additional commits to the associated branch.
  • Remove the stale label.
  • Add a comment indicating why it is not stale.

If no action is taken within 7 days, this PR will be closed.

@github-actions github-actions bot added the Stale Marks PRs as stale after a period of inactivity, which are then closed after a grace period. label Aug 29, 2024
@GuptaManan100
Member

Hello @joekelley, just a couple of review comments are left to be addressed - #16050 (comment), before we can merge this PR.

@timvaillancourt
Contributor

timvaillancourt commented Aug 29, 2024

@joekelley totally optional/deferrable: something I've wondered since #13207 was written (during 5.7 days with older kernels): is there a more elegant way now for us to detect a stalled disk? Something about writing a file to check the disk has never sat right with me

The two areas of thought I had were:

  1. Is there a signal in performance_schema, show status, etc we're not thinking of?
  2. Is there a value in /proc/$PID/* that already tells us we are blocked on writes?

I'll try to find the time to validate those ideas but I was curious what you thought. Of course, writing a temp file is valid if required
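For what it's worth, the /proc idea could be prototyped by looking at the state field of /proc/<pid>/stat, which is "D" when a task is in uninterruptible sleep (typically blocked on I/O). A small portable sketch that parses the field from a stat line, assuming the proc(5) layout; the sample line here is fabricated:

```go
package main

import (
	"fmt"
	"strings"
)

// procState extracts the process state field from a /proc/<pid>/stat line.
// The comm field is parenthesized and may itself contain spaces and
// parentheses, so per proc(5) we split after the LAST closing paren
// rather than on whitespace alone.
func procState(statLine string) string {
	i := strings.LastIndex(statLine, ")")
	if i < 0 || i+2 >= len(statLine) {
		return ""
	}
	fields := strings.Fields(statLine[i+1:])
	if len(fields) == 0 {
		return ""
	}
	return fields[0] // "R" running, "S" sleeping, "D" uninterruptible sleep, ...
}

func main() {
	// Fabricated example line in the /proc/<pid>/stat format
	// (fields after the state are truncated for brevity).
	line := "4242 (mysqld) D 1 4242 4242 0 -1 4194560"
	fmt.Println(procState(line)) // D: blocked in uninterruptible sleep
}
```

A real detector would still need to decide how long a thread must stay in "D" before calling the disk stalled, since short D-state stints are normal under healthy I/O load.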

@joekelley
Author

Thanks for the feedback! I'll address asap.

(Commenting here to satisfy the stale check because I may not get to the requested changes this week and I can't update labels on the PR)

@deepthi deepthi removed the Stale Marks PRs as stale after a period of inactivity, which are then closed after a grace period. label Aug 29, 2024
@rbranson
Contributor

I'm extremely curious why this causes an UnreachablePrimary and not a DeadPrimary. It is surprising that the tablet cannot perform basic status queries against MySQL but there are intact replicating replicas.

@erikhalperin-hub

Hey @GuptaManan100 - sorry for the delayed response here, but we've put this PR down and have stopped using StalledDiskPrimary for the time being because of a rare and complicated series of events that caused data loss. What happened went something like:

  1. The disk of the primary stalled
  2. Vtorc detected an SDP and ran an ERS
  3. The old primary's demotion got stuck and didn't finish because of what Joe described above. Importantly, demotion had not yet reached the part where it kills the in-flight queries
  4. Shortly after, the old primary's disk became healthy again, and a race condition between demotion and vtorc caused semi-sync to be turned off on the old primary before the in-flight queries were killed, so those in-flight queries completed and returned success to the client. However, these writes were now errant transactions on a replica, which got wiped. The details of the race condition are a bit hazy.

There is a fix to #14637, but we haven't yet had a chance to try it because we're fairly far behind and there are some prerequisites to getting that fix out. Ultimately, our plan is to get Vitess up to date (and keep it there) and then return to the DemotePrimary fix as well as this StalledDiskPrimary work.

@erikhalperin-hub

I'm extremely curious why this causes an UnreachablePrimary and not a DeadPrimary. It is surprising that the tablet cannot perform basic status queries against MySQL but there are intact replicating replicas.

That's a good question. What we've observed when testing freezing the filesystem with fsfreeze is that MySQL "locks up" in many ways but keeps running. I'm not familiar with replication internals, but my guess would be that replicas are querying the primary at a pretty basic networking level, which is still responsive here.

@erikhalperin-hub

@timvaillancourt

is there a more elegant way now for us to detect a stalled disk? Something about writing a file to check the disk has never sat right with me

The two areas of thought I had were:

  1. Is there a signal in performance_schema, show status, etc we're not thinking of?
  2. Is there a value in /proc/$PID/* that already tells us we are blocked on writes?

I'll try to find the time to validate those ideas but I was curious what you thought. Of course, writing a temp file is valid if required

Writing to disk has been our best signal. MySQL in our tests with fsfreeze is basically completely unqueryable, even for things like show status. We considered using an external metric such as Amazon CloudWatch or Linux kernel-level metrics, but writing to disk seemed to be the simplest solution.

@@ -0,0 +1,131 @@
package tabletmanager
Contributor

Suggested change
package tabletmanager
/*
Copyright 2024 The Vitess Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package tabletmanager

@@ -0,0 +1,103 @@
package tabletmanager
Contributor

Suggested change
package tabletmanager
/*
Copyright 2024 The Vitess Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package tabletmanager

@joekelley
Author

Closing this per Erik's comment #16050 (comment)

Thanks for all the feedback and discussion here.

@joekelley joekelley closed this Oct 3, 2024
Labels
Component: VTorc Vitess Orchestrator integration Type: Enhancement Logical improvement (somewhere between a bug and feature)

Successfully merging this pull request may close these issues.

Feature Request: Add support for StalledDiskPrimary recoveries by vtorc
7 participants