
Accept transactions from network #4

Closed
wants to merge 3 commits into from

Conversation

asonnino
Collaborator

Do we like this strategy to accept client transactions from the network?

Comment on lines 72 to 76
if self.pending_transactions.len() >= Self::MAX_PENDING_TRANSACTIONS {
tracing::warn!("Too many pending transactions, dropping transaction");
} else {
self.pending_transactions.push_back(transaction);
}
Collaborator Author

I am not a fan of this strategy to apply back pressure. Could we do it directly at the network level?

@asonnino asonnino requested a review from andll June 28, 2023 12:16
@@ -34,12 +41,15 @@ pub struct RealBlockHandler {
committee: Arc<Committee>,
authority: AuthorityIndex,
pub transaction_certified_latency: PreciseHistogram<Duration>,
rng: StdRng,
/// Transactions that are pending to be included in a block.
pending_transactions: VecDeque<Transaction>,
Collaborator

One possible suggestion on how you could implement a back pressure:

Instead of having pending_transactions: VecDeque, you could make it tokio::mpsc::Receiver<Vec<Transaction>>.

This would have a few benefits:

(a) You can simplify the interface: there is no need to expose a BlockHandler::handle_transactions method and pass-through methods in the Syncer etc. In that case the core is basically unaware of the transaction-submission part.
(b) From RealBlockHandler::new() you would return a tuple of (RealBlockHandler, tokio::mpsc::Sender<Vec<Transaction>>).
(c) In main, you could pass the transaction sender (i.e. the previously created Sender<Vec<Transaction>>) to the task that takes requests from clients.
(d) In terms of perf benefits, this removes the write lock on the core when a user submits transactions.
(e) In terms of back pressure, the task that handles user requests could do an async call to sender.send(), which provides back pressure if the channel is full.
(f) Small note: RealBlockHandler::handle_blocks is sync (like the rest of the core), but you could use Receiver::try_recv in a loop to get transactions up to MAX_TRANSACTIONS_PER_BLOCK.
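A minimal sketch of the bounded-channel idea described above. This is an illustration, not the PR's code: the names (RealBlockHandler, Transaction, MAX_TRANSACTIONS_PER_BLOCK) are stand-ins, and it uses std::sync::mpsc::sync_channel instead of tokio::mpsc so the snippet is self-contained; the back-pressure mechanism is the same, namely a bounded channel whose send blocks when the channel is full, drained non-blockingly with try_recv.

```rust
use std::sync::mpsc::{sync_channel, Receiver, SyncSender, TryRecvError};

// Hypothetical stand-ins for the PR's types.
type Transaction = Vec<u8>;
const MAX_TRANSACTIONS_PER_BLOCK: usize = 4;

struct RealBlockHandler {
    transactions: Receiver<Vec<Transaction>>,
}

impl RealBlockHandler {
    // (b) Return the handler together with the sender side of a bounded channel.
    fn new(capacity: usize) -> (Self, SyncSender<Vec<Transaction>>) {
        let (tx, rx) = sync_channel(capacity);
        (Self { transactions: rx }, tx)
    }

    // (f) Drain the channel without blocking until the per-block limit is
    // reached (a batch may overshoot the limit slightly).
    fn take_transactions_for_block(&mut self) -> Vec<Transaction> {
        let mut out = Vec::new();
        while out.len() < MAX_TRANSACTIONS_PER_BLOCK {
            match self.transactions.try_recv() {
                Ok(batch) => out.extend(batch),
                Err(TryRecvError::Empty) | Err(TryRecvError::Disconnected) => break,
            }
        }
        out
    }
}

fn main() {
    let (mut handler, sender) = RealBlockHandler::new(16);
    // (e) A client-facing task sends batches; `send` blocks when the
    // channel already holds `capacity` items, which is the back pressure.
    sender.send(vec![vec![1], vec![2]]).unwrap();
    sender.send(vec![vec![3]]).unwrap();
    let txs = handler.take_transactions_for_block();
    println!("{}", txs.len());
}
```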

Collaborator Author

I like it!

Collaborator Author

will try to change that on Tuesday when I am back

@andll
Collaborator

andll commented Jul 5, 2023

I thought a little bit more about the network interface. I think the approach you have in the PR is actually good in terms of long-term thinking. The only thing is that we need to update the network config so that it contains information about the peer, e.g. something like:

enum Peer {
   Authority(AuthorityIndex),
   Client,
}

And then the type of messages we accept will depend on the peer type: we would not accept authority messages from a Client, for example, and we would not allow sending transactions directly from one authority to another (authorities can create a block for this if they want to share a transaction).

In the long run, the "Client" will likely be a full node (and this is why I think your approach makes sense: a full node might also reuse other APIs, such as subscribing to blocks).
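The peer-type gating described above could be sketched roughly as follows. The Message variants here are hypothetical placeholders for the crate's actual message types, which are not shown in this thread:

```rust
type AuthorityIndex = u64;

// Peer identity carried in the network config, as suggested above.
#[derive(Clone, Copy)]
enum Peer {
    Authority(AuthorityIndex),
    Client,
}

// Hypothetical message kinds for illustration only.
enum Message {
    Block,              // authority-to-authority consensus traffic
    SubmitTransaction,  // client-to-authority submission
}

// Accept a message only if it is allowed for this peer type:
// clients may only submit transactions, and authorities may not
// send transactions directly to each other.
fn accept(peer: Peer, msg: &Message) -> bool {
    match (peer, msg) {
        (Peer::Authority(_), Message::Block) => true,
        (Peer::Client, Message::SubmitTransaction) => true,
        _ => false,
    }
}

fn main() {
    println!("{}", accept(Peer::Client, &Message::Block));
}
```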

@asonnino asonnino mentioned this pull request Jul 12, 2023
@asonnino
Collaborator Author

This PR is parked for now; keeping it open so as not to lose @andll's comments. I will get back to it once we have a decent synchroniser.

@asonnino asonnino marked this pull request as draft July 27, 2023 11:33
@asonnino
Collaborator Author

Close in favour of #23

@asonnino asonnino closed this Sep 18, 2023