Remote call rate #51
Conversation
signal_url: "wss://signal.holo.host"
bootstrap_service: "https://bootstrap.holo.host"
We should probably write a story to move away from production services for performance testing.
Yes, absolutely - #52
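Until that lands, one option is to stop hard-coding the endpoints; a minimal sketch, assuming hypothetical `SIGNAL_URL` / `BOOTSTRAP_URL` environment variables that this repo does not define today:

```rust
use std::env;

fn main() {
    // Hypothetical overrides: SIGNAL_URL / BOOTSTRAP_URL are illustrative
    // names only. The defaults fall back to the current production services
    // so existing runs keep working unchanged.
    let signal_url = env::var("SIGNAL_URL")
        .unwrap_or_else(|_| "wss://signal.holo.host".to_string());
    let bootstrap_service = env::var("BOOTSTRAP_URL")
        .unwrap_or_else(|_| "https://bootstrap.holo.host".to_string());
    println!("signal: {signal_url}, bootstrap: {bootstrap_service}");
}
```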
.into_iter()
.map(|info| AgentPubKey::from_raw_36(info.agent.0.clone()))
.filter(|k| k != cell_id.agent_pubkey()) // Don't call ourselves!
.collect::<Vec<_>>())
`.shuffle()`?
Yes, that makes sense. Because we start the peers in a fixed order, they may discover each other in that same order and end up with several making calls to each other at the same time. Evening that out is a good idea.
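A rough sketch of what that could look like with the `rand` crate, reusing the names from the diff above (`agent_infos` is a stand-in for whatever the iterator is built from):

```rust
use rand::seq::SliceRandom;

// Collect the discovered peers first, then shuffle, so conductors that were
// started in a fixed order don't all pick call targets in that same order.
let mut peers = agent_infos
    .into_iter()
    .map(|info| AgentPubKey::from_raw_36(info.agent.0.clone()))
    .filter(|k| k != cell_id.agent_pubkey()) // Don't call ourselves!
    .collect::<Vec<_>>();
peers.shuffle(&mut rand::thread_rng());
```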
bindings/trycp_runner/src/common.rs (outdated)
ctx.runner_context()
    .executor()
    .execute_in_place(async move {
        client.shutdown(agent_id.clone(), None, None).await?;
I know we run reset as part of startup, but do we want to run it as part of shutdown too (maybe with `let _ =`) so we don't leave the databases of previous tests lying around?
Yep, particularly for tests that create more data than this one does, that's probably a good pattern to put in place. Added this to the agent teardown and consumed the error at that level.
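For reference, a sketch of the resulting teardown pattern; the `reset` call and its signature are a guess modelled on `shutdown`, not the actual API:

```rust
ctx.runner_context()
    .executor()
    .execute_in_place(async move {
        client.shutdown(agent_id.clone(), None, None).await?;
        // Best effort cleanup: discard the error with `let _ =` so a failed
        // reset removes stale databases when it can but never fails the test.
        let _ = client.reset(agent_id.clone(), None).await;
        Ok(())
    })?;
```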
bindings/trycp_runner/src/common.rs (outdated)
let credentials = client
    .authorize_signing_credentials(
        agent_id.clone(),
Originally I was confused, thinking `agent_id` here was the agent pubkey, but no, this is an indicator telling the multi client which sub-client we're talking to? I don't know if a different variable name would be clearer.
Also, is it true that since we set up the init with unrestricted access, this `authorize_signing_credentials` isn't strictly necessary... except that the calls need to be signed by something?
Correct, it's nothing to do with the agent pubkey. I've renamed it to `agent_name`, which doesn't imply a unique identifier and is hopefully clearer? What that field is for is documented on the getter function if you're working in the code, but I agree it's confusing once it's being reviewed or has been copied into a variable.
Correct that we have unrestricted zome call access, but it's the zome calls from the scenario to the conductor that we're setting up signing credentials for. It's effectively pointless when we have access to the admin websocket anyway, because we already have complete control over the conductor... but that's what the API requires. Hopefully, now that it's hidden in a common function, scenario authors won't have to worry too much about it.
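To illustrate the shape of that common function, a hypothetical sketch; the client type, return type, and parameter list are assumptions rather than the real API:

```rust
// Authorize signing credentials for zome calls from the scenario to the
// conductor. Access is already unrestricted, but the zome call API still
// requires every call to be signed by something.
async fn setup_signing_credentials(
    client: &TryCPClient,   // assumed client type
    agent_name: String,     // names the sub-client, not an agent key
    cell_id: CellId,
) -> anyhow::Result<SigningCredentials> {
    let credentials = client
        .authorize_signing_credentials(agent_name, cell_id, None)
        .await?;
    Ok(credentials)
}
```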
This looks amazing! 🎉
// Best effort to remove data and clean up.
// You should comment out this line if you want to examine the result of the scenario run!
let _ = reset_trycp_remote(ctx);
Does this work after we disconnect the trycp_client on the line above?
Definitely not :)
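For posterity, the fix is just to swap the order so the reset runs while the connection is still open; `disconnect_trycp_client` is a placeholder name for the disconnect step above:

```rust
// Best effort to remove data and clean up; must happen while the
// trycp_client connection is still alive.
let _ = reset_trycp_remote(ctx);
// Only now tear down the connection (placeholder name).
disconnect_trycp_client(ctx)?;
```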
woot 👍