
Remote call rate #51

Merged: ThetaSinner merged 26 commits into main from remote-call-rate on Jun 18, 2024
Conversation

ThetaSinner (Member)

No description provided.

@ThetaSinner requested a review from a team on June 14, 2024, 17:36
Comment on lines +5 to +6
signal_url: "wss://signal.holo.host"
bootstrap_service: "https://bootstrap.holo.host"
Contributor

We should probably write a story to move away from production services for performance testing.

Member Author

Yes, absolutely - #52

.into_iter()
.map(|info| AgentPubKey::from_raw_36(info.agent.0.clone()))
.filter(|k| k != cell_id.agent_pubkey()) // Don't call ourselves!
.collect::<Vec<_>>())
Contributor

.shuffle()?

Member Author

Yes, that makes sense. Because we start peers in a fixed order, they may discover each other in that same order and end up with several of them making calls to each other at the same time. Evening that out is a good idea.
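For reference, a minimal sketch of what that shuffle could look like with the rand crate. rand's SliceRandom operates on the collected Vec rather than inside the iterator chain, and the helper name here is hypothetical rather than taken from this PR:

```rust
use holo_hash::AgentPubKey;
use rand::seq::SliceRandom;

// Hypothetical helper: randomise the order of the discovered peers so
// that peers started in a deterministic order don't all end up calling
// each other in that same order.
fn shuffle_peers(mut peers: Vec<AgentPubKey>) -> Vec<AgentPubKey> {
    peers.shuffle(&mut rand::thread_rng());
    peers
}
```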

ctx.runner_context()
.executor()
.execute_in_place(async move {
client.shutdown(agent_id.clone(), None, None).await?;
Contributor

I know we run reset as part of the startup, but do we want to run it as part of shutdown too (maybe with let _ =) so we don't leave the databases of previous tests lying around?

Member Author (Jun 17, 2024)

Yep, particularly for tests that create more data than this one does, that's probably a good pattern to put in place.

Added this to the agent tear-down and consumed the error at that level.
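A rough sketch of that pattern, reusing the calls quoted elsewhere in this PR (the exact shape of the tear-down hook is assumed, not shown here):

```rust
// Hypothetical agent tear-down body: shut the conductor down, then make
// a best-effort reset so databases from previous runs aren't left behind.
client.shutdown(agent_id.clone(), None, None).await?;
// `let _ =` consumes the error so a failed cleanup can't fail the test.
let _ = reset_trycp_remote(ctx);
```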


let credentials = client
.authorize_signing_credentials(
agent_id.clone(),
Contributor

Originally I was confused, thinking agent_id here was the agent pubkey, but no, this is an indicator to the multi-client of which sub-client we're talking to? I don't know if a different variable name would be clearer.

Also, is it true that since we set up the init with unrestricted access, this authorize_signing_credentials isn't strictly necessary... except that the calls need to be signed by something?

Member Author

Correct, it's nothing to do with the agent pubkey - I've renamed it to agent_name, which doesn't imply a unique identifier and is hopefully clearer? What that field is for is documented on the getter function if you're working in the code, but I agree it's confusing once it's being reviewed or has been copied into a variable.

Correct that we have unrestricted zome call access, but it's the zome calls from the scenario to the conductor that we're setting up signing credentials for. It's effectively pointless when we have access to the admin websocket anyway, because we already have complete control over the conductor... but that's what the API requires. Hopefully, now that it's hidden in a common function, scenario authors won't have to worry too much about it.
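For context, a sketch of what that credential setup amounts to in terms of the plain holochain_client admin API, which the common function presumably wraps; the function name and error handling here are assumptions, not code from this PR:

```rust
use holochain_client::{AdminWebsocket, AuthorizeSigningCredentialsPayload, SigningCredentials};
use holochain_zome_types::prelude::CellId;

// Even with an unrestricted capability grant set up in init, zome calls
// made from the scenario still have to be signed by *some* keypair,
// which is why this step exists at all.
async fn authorize(admin: &AdminWebsocket, cell_id: CellId) -> anyhow::Result<SigningCredentials> {
    admin
        .authorize_signing_credentials(AuthorizeSigningCredentialsPayload {
            cell_id,
            functions: None, // None grants all zome functions
        })
        .await
        .map_err(|e| anyhow::anyhow!("failed to authorize signing credentials: {e:?}"))
}
```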

neonphog (Contributor) previously approved these changes Jun 14, 2024

This looks amazing! 🎉


// Best effort to remove data and cleanup.
// You should comment out this line if you want to examine the result of the scenario run!
let _ = reset_trycp_remote(ctx);
Contributor

Does this work after we disconnect the trycp_client on the line above?

Member Author

Definitely not :)
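Presumably the fix is just to swap the order, something like this (a sketch; disconnect_trycp_client is a hypothetical name for the disconnect step mentioned in the comment above, which isn't quoted in this PR):

```rust
// Best effort to remove data and clean up, while the trycp client is
// still connected. Comment this line out if you want to examine the
// result of the scenario run!
let _ = reset_trycp_remote(ctx);
// Hypothetical: disconnect only after the reset has gone through.
disconnect_trycp_client(ctx);
```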

neonphog (Contributor) previously approved these changes Jun 17, 2024

woot 👍

ThetaSinner merged commit 1e0865b into main on Jun 18, 2024
1 check passed
ThetaSinner deleted the remote-call-rate branch on June 18, 2024, 12:03