Send a batch of blocks #35
Conversation
This stack of pull requests is managed by Graphite.
if resp.StatusCode != http.StatusOK {
	return fmt.Errorf("unexpected status code: %v, %v", resp.StatusCode, resp.Status)
}
responseStatus = resp.Status
I thought you need a header to say:
req.Header.Set("x-dune-batch-size", strconv.Itoa(len(blocks))) or something, to simplify the server side, otherwise the server side needs to...
Hmm, you're right, we don't need it, because "opstack" means 3 messages per block, so the server can derive how many blocks it has received.
When we use non-opstack, we might need to pass additional information in the headers.
Hm, doesn't hurt to declare this in a header, that way we know what to expect!
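For reference, a minimal sketch of what declaring the batch size in a request header could look like on the client side. This is only an illustration of the idea discussed above: the header name x-dune-batch-size, the sendBlocks helper, and its parameters are assumptions from this thread, not the actual API in this PR.

package ingester

import (
	"bytes"
	"context"
	"fmt"
	"net/http"
	"strconv"
)

// sendBlocks is a hypothetical helper: it posts one batch of encoded blocks
// and declares the batch size in a header so the server knows what to expect.
func sendBlocks(ctx context.Context, client *http.Client, url string, payload []byte, batchSize int) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(payload))
	if err != nil {
		return err
	}
	// Tell the server how many blocks are in this payload, so it does not
	// have to derive the count from the number of messages.
	req.Header.Set("x-dune-batch-size", strconv.Itoa(batchSize))

	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status code: %v, %v", resp.StatusCode, resp.Status)
	}
	return nil
}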
Thanks for the feedback! I probably didn't make it very clear, but the Dune API client part was a bit rushed; I think I know what to do there. I do need more eyes/opinions on the Ingester part though.
This PR changes the node indexer from sending one block at a time to sending a batch of blocks. Earlier we implemented concurrent block fetching with buffering (#32). Now, on a configurable interval (defaulting to every second), we check the buffer and send all blocks that are ready.
This is still a WIP and should probably be split into two PRs.
We cannot use/merge this until we have support for batch requests on the API.
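As a rough illustration of the interval-driven batching described above: every tick, the loop drains whatever blocks have accumulated in the buffer and sends them as one batch. This is a sketch under assumptions; the names (Block, flushLoop, the channel-based buffer, and the send callback) are placeholders, not the actual types in this PR.

package ingester

import (
	"context"
	"time"
)

// Block is a placeholder for whatever block payload the indexer buffers.
type Block struct {
	Number  int64
	Payload []byte
}

// flushLoop collects blocks from the buffer and, on every tick of the
// configurable interval, sends everything collected so far as one batch.
func flushLoop(ctx context.Context, interval time.Duration, buffer <-chan Block, send func(context.Context, []Block) error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	var pending []Block
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case b := <-buffer:
			pending = append(pending, b)
		case <-ticker.C:
			if len(pending) == 0 {
				continue
			}
			if err := send(ctx, pending); err != nil {
				return err
			}
			// Start a fresh slice so the sent batch is not overwritten.
			pending = nil
		}
	}
}

The ticker keeps the send cadence fixed (one second by default in this PR) regardless of how quickly blocks arrive, so the API receives fewer, larger requests.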