
RFC: vtgateproxy #16045

Open
demmer opened this issue Jun 4, 2024 · 8 comments

Comments

@demmer
Member

demmer commented Jun 4, 2024

Feature Description

RFC: vtgateproxy

Proposal to introduce a new vtgateproxy vitess component.

This process would sit between the client application (typically as a sidecar on the same host) and the vtgate tier: it would accept connections from clients over the mysql protocol and forward queries to the vtgates over gRPC.

Motivation

The main motivation is to improve performance and reliability for client application runtimes (notably hhvm, which is what Slack runs) that do not natively support gRPC or the ability to pool mysql protocol connections across application requests. As a result, each application request needs to establish a new tcp connection to the vtgate to execute queries over the mysql protocol.

The resulting connection churn puts load on the network and on the vtgate tier, and results in application delays. By running a forward proxy as a sidecar to the application server, the mysql protocol connections stay on the same host while the sidecar maintains long-lived connections to the vtgates. For context, at Slack's scale the client applications establish more than 2M tcp connections per second to the vtgate tier. Most of these are very short-lived, running only one or two queries, so the overhead of establishing and tearing down each connection is substantial.

The key benefit of the proxy is that, by translating from the mysql protocol to gRPC on the wire, clients get pooled transport connections as well as the other load balancing and failover features of gRPC.

This functionality is somewhat akin to the open source proxysql, except of course that the outbound protocol is gRPC rather than the mysql protocol, so the proxy does not need the same complex connection state management.

At some point in vitess' history there was an l2vtgate process, which accomplished a similar goal of reducing inbound connections between the application and the vtgates, though this was removed due to lack of use and complexity (?).

Design + Proof Of Concept

We are developing and testing a work in progress implementation inside Slack's environment:
slackhq#385

The proxy itself is fairly straightforward. It reuses the existing mysql server module and grpcvtgateconn, so handling requests is largely a matter of connecting the mysql server handlers to the client-side vtgateconn API. The gRPC vtgate protocol already supports multiplexing multiple client Sessions on the same transport connection, so the proxy can simply map each mysql protocol connection to a Session object in the vtgateconn.
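To make the mapping concrete, here is a minimal sketch of the idea. The types below are simplified stand-ins written for illustration; they are not the actual vitess mysql server or grpcvtgateconn interfaces:

```go
package vtgateproxy

import (
	"context"
	"fmt"
	"sync"
)

// Session stands in for the vtgate Session that the gRPC protocol carries
// alongside every request (target keyspace, autocommit state, etc.).
type Session struct {
	TargetString string
}

// VTGateConn stands in for grpcvtgateconn: one shared gRPC transport that
// multiplexes many logical sessions.
type VTGateConn struct{}

// Execute stands in for the vtgate Execute RPC: the Session travels with the
// query, and an updated Session comes back in the response.
func (c *VTGateConn) Execute(ctx context.Context, s *Session, query string) (string, error) {
	return fmt.Sprintf("result of %q against %q", query, s.TargetString), nil
}

// Proxy maps each inbound mysql protocol connection (keyed by connection ID)
// to its own Session, while every connection shares the one VTGateConn.
type Proxy struct {
	mu       sync.Mutex
	vtgate   *VTGateConn
	sessions map[uint32]*Session
}

// session returns (creating if needed) the Session for a mysql connection.
func (p *Proxy) session(connID uint32) *Session {
	p.mu.Lock()
	defer p.mu.Unlock()
	s, ok := p.sessions[connID]
	if !ok {
		s = &Session{}
		p.sessions[connID] = s
	}
	return s
}
```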

In general, queries are proxied without parsing, normalization, or inspection. One exception is the use statement, which is handled locally within the proxy to avoid an unnecessary round trip to vtgate just to set the TargetString in the Session object. This matters because the mysql_server module always executes a use statement when a connection is established in order to set the "dbname" for the connection.
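Continuing the stand-in sketch above, the local handling of use might look roughly like this (again purely illustrative; the real branch hooks into vitess' mysql server handler callbacks and parser rather than doing string matching):

```go
// handleQuery extends the Proxy sketch above (it also needs "strings" added
// to that file's imports). A `use <dbname>` statement only has to update the
// connection's local Session, so it is answered inside the proxy; everything
// else is forwarded unmodified over the shared gRPC connection.
func (p *Proxy) handleQuery(ctx context.Context, connID uint32, query string) (string, error) {
	s := p.session(connID)
	q := strings.TrimSpace(query)
	if strings.HasPrefix(strings.ToLower(q), "use ") {
		// Set the Session's TargetString without a round trip to vtgate.
		s.TargetString = strings.Trim(q[len("use "):], "` ")
		return "OK", nil
	}
	return p.vtgate.Execute(ctx, s, q)
}
```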

gRPC itself supports various forms of discovery and load balancing. Vitess does not natively handle vtgate service discovery or load balancing, leaving that to the client application (nor is it proposed that it should), so the proxy will need to be flexible here as well.

The implementation in the branch above supports a discovery mechanism that suits Slack's needs, based on watching a JSON file that contains the vtgate host information. To let the application target different vtgate pools for different workloads (again a requirement for Slack), the branch leverages mysql protocol connection attributes: rather than addressing different pools of vtgates directly, the application passes the desired pool along with the connection. The implementation also supports cell-local affinity, i.e. in an AWS environment clients will prefer vtgates in the same availability zone.
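As a rough illustration of what such a discovery file and the pool/zone filtering could look like, here is a hedged sketch; the field names, file layout, and function names are hypothetical and do not necessarily match the slackhq branch:

```go
package discovery

import (
	"encoding/json"
	"os"
)

// Target describes one vtgate as it might appear in the watched JSON file,
// e.g. {"host": "vtgate-1", "port": 15991, "pool": "web", "zone": "us-east-1a"}.
type Target struct {
	Host string `json:"host"`
	Port int    `json:"port"`
	Pool string `json:"pool"`
	Zone string `json:"zone"`
}

// Load reads the file once; a real implementation would watch it for changes
// and push updates into the gRPC resolver.
func Load(path string) ([]Target, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var targets []Target
	if err := json.Unmarshal(data, &targets); err != nil {
		return nil, err
	}
	return targets, nil
}

// Filter applies the pool requested via mysql connection attributes and
// prefers vtgates in the local zone, falling back to other zones if none match.
func Filter(all []Target, pool, localZone string) []Target {
	var local, remote []Target
	for _, t := range all {
		if t.Pool != pool {
			continue
		}
		if t.Zone == localZone {
			local = append(local, t)
		} else {
			remote = append(remote, t)
		}
	}
	if len(local) > 0 {
		return local
	}
	return remote
}
```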

The proxy can support multiple gRPC load balancing algorithms. In addition to the builtin pick_first and round_robin balancers, the prototype above also supports a bespoke first_ready balancer. In this implementation, the discovery layer chooses a configurable number of downstream targets (respecting the pool target and az affinity described above), and the balancer attempts to establish gRPC subconns to all of them but sends all requests to the first available vtgate. This is essentially a hybrid of the two built-in approaches: similar to pick_first, a given client will generally send requests to only a single downstream vtgate; however, if that vtgate fails, the balancer can quickly fail over to one of the "standby" alternatives, much like round_robin. We are still experimenting with this approach at Slack.
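For flavor, here is a much-simplified sketch of the "first ready" idea expressed as a grpc-go picker built on the balancer/base helpers. This is not the Slack implementation (which also manages how many subconns get established); it only shows the "send everything to one ready subconn, keep the rest warm" behavior, and the registration name is made up:

```go
package firstready

import (
	"sort"

	"google.golang.org/grpc/balancer"
	"google.golang.org/grpc/balancer/base"
)

// Name is a hypothetical registration name for this sketch.
const Name = "first_ready_sketch"

func init() {
	balancer.Register(base.NewBalancerBuilder(Name, &pickerBuilder{}, base.Config{HealthCheck: true}))
}

type pickerBuilder struct{}

// Build is called whenever the set of READY subconns changes. It picks one
// deterministically (lowest address) and routes every request to it; the
// other ready subconns stay connected as warm standbys, and one of them is
// promoted on the next Build if the chosen vtgate goes away.
func (*pickerBuilder) Build(info base.PickerBuildInfo) balancer.Picker {
	if len(info.ReadySCs) == 0 {
		return base.NewErrPicker(balancer.ErrNoSubConnAvailable)
	}
	type entry struct {
		sc   balancer.SubConn
		addr string
	}
	var entries []entry
	for sc, scInfo := range info.ReadySCs {
		entries = append(entries, entry{sc: sc, addr: scInfo.Address.Addr})
	}
	sort.Slice(entries, func(i, j int) bool { return entries[i].addr < entries[j].addr })
	return &picker{sc: entries[0].sc}
}

type picker struct {
	sc balancer.SubConn
}

// Pick always returns the single chosen subconn.
func (p *picker) Pick(balancer.PickInfo) (balancer.PickResult, error) {
	return balancer.PickResult{SubConn: p.sc}, nil
}
```

The proxy would then select such a policy by name via the gRPC service config when dialing the vtgates.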

Status

At this point we'd like to solicit feedback from the Vitess community on this proposal and the overall approach with the goal of merging upstream and including the component as part of the standard Vitess distribution.

We aim to validate the approach internally within Slack, which we are in the midst of doing now, to vet the implementation and prove its value.

Use Case(s)

As mentioned above, this proxy would potentially benefit any environment in which the runtime cannot maintain persistent mysql connections or natively integrate with GRPC.

@demmer added the Type: Feature Request and Needs Triage labels Jun 4, 2024
@makmanalp

We (HubSpot) mainly use the grpc vtgate protocol, but I wanted to mention that another potential use case for a proxy like this is upgrades.

We do a blue/green style upgrade where we can migrate to a new vitess version keyspace by keyspace (we have thousands, so this is necessary). We can upgrade or downgrade a keyspace by reparenting to an upgraded or downgraded cell, and flipping which cell (i.e. which vtgates) an app instance talks to via DNS. The DNS bit adds some cruft and some amount of propagation delay. Instead you could imagine doing a cell flip in a proxy layer like this.

@makmanalp

Then again, the more I think about it, it doesn't make sense for us since we already use the grpc java driver: if this is to be a sidecar, at that point we would probably do that flip in the driver (or our wrapper of it) rather than take on the overhead of another proxy layer 🤔 But for users of the mysql protocol it could be a neat plus.

@harshit-gangal added the Type: RFC (Request For Comment) label and removed the Needs Triage label Jun 24, 2024
@deepthi
Member

deepthi commented Aug 6, 2024

I don't think Vitess should take on the responsibility of providing a proxy service from mysql -> grpc. Some of the reasons for this opinion are structural, while some are more nebulous and subject to revision based on community feedback.

  • Introducing a new component (even if it is optional) means that we now need to start maintaining it, maintain RPC compatibility, add CI tests to ensure we are not breaking upgrades/downgrades, provide a way to deploy it using the k8s operator, etc. This will be a significant amount of ongoing time commitment.
  • We would need to hear from others in the community that there is a strong need for an additional new component. I'm skeptical because most people seem to be either using a framework that does connection pooling, or using grpc (or the java variant of it). As you correctly called out, there was the concept of l2vtgate which was removed because no one seemed to be using it.
  • It seems to me that each Vitess implementation might have slightly different needs and it will be difficult to reconcile all of them and implement a generic solution.
  • Doing vtgate discovery via a JSON file is not feasible in a kubernetes environment, so an alternate discovery mechanism would be needed.

My recommendation would be to keep this as part of Slack's tooling versus putting it in open source Vitess.

@mattlord
Contributor

mattlord commented Aug 6, 2024

At a quick glance this seems nice: an OSS version, at a very high level anyway, of an edge/network layer which larger individual companies often implement in some form. I do feel strongly that if it gets merged into Vitess it needs to be optional and experimental. Still, I can see where the maintenance burden could be quite high.

My personal preference would be that this be made an open source 3rd party package, outside of the official Vitess project org, and we could mention it in the official docs or something so that Vitess users are made aware of it and can try it as an optional third party component. That way the Vitess maintainers don't have to take on the continuing development, testing, enhancement, build/packaging, support, documentation, etc. work indefinitely, or risk it becoming another contributed component that gets largely abandoned (like e.g. the mysql group replication plugin and vitess-mixin code). How this would work with e.g. the k8s operator, I have no idea. And I would imagine that people will want new features and different (optional) behaviors, and will certainly encounter bugs, etc. It's a lot of work for the Vitess maintainers to take on another component like this (vtadmin being the most recent one, where the original developers no longer actively work on it).

With the 3rd party OSS path, if it then becomes popular enough and widely used in production by a number of Vitess users over time, then we could reconsider making it an official component.

@rbranson
Contributor

rbranson commented Aug 6, 2024

I have two clarifying questions that might help provide feedback on gaps within Vitess, regardless of the acceptance of this solution:

  1. Why not colocate vtgate with the application tier, assuming that would accomplish the same goal? If you can't, what are the limitations which prevent that from being a solution?

  2. What are the benefits of "standalone" termination of the MySQL protocol into gRPC calls? Another way to ask this might be: what about this design is better than a generic MySQL protocol proxy?

@henryr

henryr commented Aug 8, 2024

Why not colocate vtgate with the application tier, assuming that would accomplish the same goal? If you can't, what are the limitations which prevent that from being a solution?

As I understand it, the vtgates perform full-mesh health checking of the tablets so that they can route around failures. We found that running one vtgate per client node did not scale well as a result.

What are the benefits of "standalone" termination of the MySQL protocol into gRPC calls? Another way to ask this might be: what about this design is better than a generic MySQL protocol proxy?

Vitess-over-gRPC is much more suitable for session multiplexing over a single connection. gRPC is built for having multiple requests in flight at once on one transport connection, whereas the MySQL protocol (I think) requires blocking until a query finishes before the underlying TCP connection can be reused. Session state is also more complex: it requires an intelligent proxy to push and pop session variables before switching to a new session, something that is handled naturally with metadata in vitess-over-gRPC.

We run a lot of queries concurrently, and so really want the benefit that proper multiplexing gives us.

@demmer
Member Author

demmer commented Aug 8, 2024

@deepthi / @mattlord while I'm sympathetic to the arguments that taking this on as a component does bring some burden, I do question a bit what the "threshold" is to qualify as "significant community interest".

Admittedly this is a bit of a "flex", but Slack is undoubtedly one of the largest (if not the largest) adopters of Vitess, used by millions of people daily, running tens of millions of queries on tens of thousands of tablets. As you know, we've historically been significant contributors to the project (even if that has waned somewhat of late) and are quite heavily invested in the success of Vitess.

So, while I would not advocate adding things to the project that are entirely Slack-specific, we have a long history of building things that we knew for certain we would use, and which might be used by other adopters. The recent tabletbalancer PR #16351 is one more example. So I'm not seeing why we necessarily have a different threshold for this component.

IMO the best way for Vitess to flourish as a project is to accept things which help adoption across a wide range of environments. I think it is fair to say that we do need to make this more adaptable (e.g. the discovery is something we'd want to make more flexible and pluggable), and we would be happy to talk about how best to get that done and decide whether the pros / cons are worth it.

But at the end of the day I don't think that a potentially valuable contribution should need to go on a popularity tour to gain support before being accepted.

@mattlord
Contributor

mattlord commented Aug 9, 2024

@demmer speaking only for myself here...

The biggest aspect which makes this different is that it's a new component. There are approximately 8 maintainers doing the day-to-day work on the project (project/program management, community support, documentation, releases, CI/test work, issue triage, PR reviews, bug fixes and feature requests from the community, etc). It's a virtual certainty that having this new component would take up a considerable amount of time for this group every release cycle. That's time that then cannot be spent on other things that the maintainers and community at large believe are important and broadly useful to the project. Given the real world constraints, when deciding to work on or accept new things, the level of community interest and the estimated value it would bring to the project are always a consideration. It's not only reasonable, I would argue it's objectively the responsible and correct thing to do as maintainers of a popular OSS project.

It may very well be true that a number of users will find great value in this. My point is that I don't see clear signals that this is true yet. Having this as a 3rd party OSS piece that the project helps raise awareness of is simply one potential way that this could continue to be developed and maintained — by those that are interested in it — while offering a way to clearly demonstrate interest across the user base. Otherwise we can use this RFC as the mechanism to do that. To the best of my knowledge, no other users in the community have yet clearly expressed an interest in using this in their deployments. Again, it's not a judgement on the feature or quality or anything else — but rather a cost-benefit analysis based on what little info we currently have. Another good and typical way to try and move it forward and gauge interest would be to bring this up for discussion with the broader Vitess community at an upcoming monthly community meeting.
