codec: implement codec v2 #138
base: main
Conversation
The downside I see to this implementation, and to the CodecV2 implementation in general, is the new hard reliance on the grpc module. This is not ideal when using protobuf in contexts other than gRPC that still use codecs for marshaling and unmarshalling. For example, we also use vtprotobuf within ConnectRPC (https://connectrpc.com), which is effectively an alternative to gRPC, so it is wonky to bring in a dependency on gRPC itself just to use this codec. It seems more appropriate to me if this lived outside the core module. Prior to this, vtprotobuf had no runtime dependency on the grpc package (it was only used within the code generator), so this change makes it an explicit runtime dependency, which is awkward if you don't even use gRPC.
I've typically vendored this Codec override into my codebase, but I'm PRing here for review. Maybe this could just be in an examples directory and not directly in vtprotobuf to avoid the dependency?
@jzelinskie: I don't personally have a strong opinion, but the example approach would fit, given that's what is done for supporting both vtproto and standard protobuf in a single codec.
Also, since someone can generate only a subset of the vtprotobuf methods (the description notes that not everyone generates SizeVT()), the codec shouldn't assume all of them are present.
Cool! First off, thanks @jzelinskie for taking the time to contribute and review this with us. @mattrobenolt's concern is certainly valid, but as you've already pointed out, the pattern we use for these kinds of adapters is merging them as examples, so as not to bring specific dependencies (gRPC in this case) into the project. We've done something similar for DRPC and found it generally useful without requiring us to bring in DRPC as a dependency.

Now, for those following along at home: the new buffers feature that shipped in gRPC 1.66 actually has a performance regression that fortunately @coxley has figured out and fixed (grpc/grpc-go#7571). So thanks for that, Codey! The fix hasn't been merged yet, which means that we can discuss the implementation of this side of the codec, but running any actual benchmarks will be pointless until the fix lands and a new minor release is tagged.

As for the implementation itself: I think @na--'s comments are on point. Besides those changes, we need to look at benchmarks before we can discuss further. I honestly have no idea whether this will be a performance improvement at all! I think the killer feature here would be having a separate codegen step that allows serializing directly into gRPC's pooled buffers.

In conclusion, my plan here is to address the review feedback, wait for the upstream fix to land in a tagged release, and then benchmark.
I'll keep y'all posted!
@vmg Something that may be neat is finally being able to use pooled vtproto objects for unary clients and servers. It'd be a bit wonky: it would rely on a custom BufferPool. If the pool and codec could communicate, the uintptr (or weak.Pointer in Go 1.24) for a pooled slice could be mapped to a vtproto message, so that on pool.Put we know we can release it. It may be more indirection than it's worth, but the pool interface at least makes it possible to derive the lifetime from gRPC's perspective.
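The pool/codec hand-off described above could be sketched, very roughly, as follows. This is a hypothetical illustration, not vtprotobuf or gRPC code: `ownerPool`, `track`, and the release callback are all invented names, and it keys on the buffer's backing-array pointer via `unsafe.SliceData` (Go 1.20+) rather than `weak.Pointer` to stay portable. A real implementation would need to worry about pointer reuse after GC, which is exactly why `weak.Pointer` is attractive.

```go
package main

import (
	"fmt"
	"sync"
	"unsafe"
)

// ownerPool sketches the idea: when a codec marshals a pooled message
// into a buffer, it records the buffer's backing-array pointer together
// with a release callback; when the transport returns the buffer via
// Put, the pool fires the callback so the message can be released too.
// All names here are hypothetical.
type ownerPool struct {
	mu     sync.Mutex
	owners map[uintptr]func() // backing-array address -> release callback
}

func newOwnerPool() *ownerPool {
	return &ownerPool{owners: make(map[uintptr]func())}
}

func key(buf []byte) uintptr {
	return uintptr(unsafe.Pointer(unsafe.SliceData(buf)))
}

// track associates buf's backing array with a release callback
// (e.g. one that returns a pooled vtproto message).
func (p *ownerPool) track(buf []byte, release func()) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.owners[key(buf)] = release
}

// Put is what a gRPC BufferPool-style hook would call when the
// transport is done with the buffer.
func (p *ownerPool) Put(buf []byte) {
	p.mu.Lock()
	release, ok := p.owners[key(buf)]
	if ok {
		delete(p.owners, key(buf))
	}
	p.mu.Unlock()
	if ok {
		release()
	}
}

func main() {
	pool := newOwnerPool()
	buf := make([]byte, 8)
	released := false
	pool.track(buf, func() { released = true })
	pool.Put(buf)
	fmt.Println(released) // true
}
```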
Now testing vitessio/vitess#16790 in Vitess itself. Hoping this will show up in the benchmarks!
```go
if m, ok := v.(vtprotoMessage); ok {
	size := m.SizeVT()
	if mem.IsBelowBufferPoolingThreshold(size) {
		buf := make([]byte, 0, size)
```
There is a bug on this line IMO: it should be either `make([]byte, size)` here, or `mem.BufferSlice{mem.SliceBuffer(buf[:size])}` on line 35. The original slice reference still has its length set to 0, so the method effectively returns a zero-length byte array.
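The length/capacity mix-up behind this bug can be demonstrated with a minimal stdlib-only snippet (the `size` value is illustrative):

```go
package main

import "fmt"

func main() {
	size := 16

	// Buggy pattern: length 0, capacity `size`. Any consumer that
	// reads the slice's length sees an empty buffer.
	buggy := make([]byte, 0, size)
	fmt.Println(len(buggy), cap(buggy)) // 0 16

	// Fixed pattern: length `size`, ready to be written into by an
	// index-based marshaler.
	fixed := make([]byte, size)
	fmt.Println(len(fixed), cap(fixed)) // 16 16

	// Alternatively, re-slice the zero-length buffer up to its
	// capacity after marshaling, as the reviewer's buf[:size] does.
	fmt.Println(len(buggy[:size])) // 16
}
```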
Yes! Apologies, I have this fixed in my PR for SpiceDB but didn't carry the changes over here.
This has worked wonderfully in upstream Vitess: vitessio/vitess#16790 (comment)
This is a follow-up to grpc/grpc-go#6619.

Go's gRPC implementation added a new `CodecV2` interface that enables integration with its memory pooling logic. Adopting it should vastly improve performance and reduce garbage collection overhead. I took a stab at an implementation using the upstream version as a reference.

In review, I'm looking for confirmation that this is the right approach for pooling, and for any feedback to make this more robust for the variety of vtprotobuf users. For example, I'm requiring `SizeVT()` here, which not everyone might generate; perhaps I could do a type assertion for that usage.
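The type-assertion fallback mentioned here could look roughly like the following. This is a sketch, not the PR's code: the split into two interfaces and the `marshal` helper are assumptions, with the method names `MarshalVT()` and `SizeVT()` taken from the discussion above.

```go
package main

import "fmt"

// Interfaces mirroring what vtprotobuf generates. The split into two
// separate interfaces is this sketch's assumption: it lets a codec
// work with messages that were generated without the size feature.
type vtprotoMarshaler interface {
	MarshalVT() ([]byte, error)
}

type vtprotoSizer interface {
	SizeVT() int
}

// marshal sketches the fallback: if SizeVT was generated, its result
// could drive buffer-pooling decisions; if not, we still marshal.
func marshal(v any) ([]byte, error) {
	m, ok := v.(vtprotoMarshaler)
	if !ok {
		return nil, fmt.Errorf("message %T does not implement MarshalVT", v)
	}
	if s, ok := v.(vtprotoSizer); ok {
		// In a real codec the size would select a pooled vs. plain
		// buffer; here we only demonstrate the optional assertion.
		_ = s.SizeVT()
	}
	return m.MarshalVT()
}

// msg is a stand-in for a generated message.
type msg struct{ data []byte }

func (m *msg) MarshalVT() ([]byte, error) { return m.data, nil }
func (m *msg) SizeVT() int                { return len(m.data) }

func main() {
	b, err := marshal(&msg{data: []byte("hi")})
	fmt.Println(string(b), err) // hi <nil>
}
```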