
feat: aes-135-define-a-network-where-we-only-write-to-one-node-and-read #185

Conversation

samika98
Contributor

See the parent PR for the description: #184


linear bot commented Jun 17, 2024

@samika98 samika98 requested review from dav1do and smrz2001 June 17, 2024 00:39
@samika98 samika98 self-assigned this Jun 17, 2024
@samika98 samika98 changed the title feat: validate sync feat: aes-135-define-a-network-where-we-only-write-to-one-node-and-read Jun 17, 2024
Contributor

@dav1do dav1do left a comment


nice! have a few questions/comments but nothing major 🚀

runner/src/scenario/ceramic/new_streams.rs
```rust
if store_in_redis {
    let mut conn: tokio::sync::MutexGuard<'_, MultiplexedConnection> = conn.lock().await;
    let stream_id_string = response.to_string();
    let _: () = conn.sadd("anchor_mids", stream_id_string).await.unwrap();
```
Contributor

Continuing my comment from above: if the connection is dead, we could grab user_data.redis_cli() and make/store a new one.
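The recreate-on-failure idea the reviewer suggests could be sketched roughly as below. This is a minimal, std-only illustration of the pattern, not the PR's code: `Conn` and the `make_conn` factory stand in for the real `MultiplexedConnection` and `user_data.redis_cli()`, neither of which appears here.

```rust
// Hypothetical sketch: retry a Redis-style write once after rebuilding
// a dead connection. `Conn` stands in for MultiplexedConnection and the
// `make_conn` factory stands in for user_data.redis_cli().

struct Conn {
    alive: bool,
}

impl Conn {
    // Stand-in for `conn.sadd("anchor_mids", ...)`: fails when the
    // connection is dead.
    fn sadd(&self, _key: &str, _member: &str) -> Result<(), String> {
        if self.alive {
            Ok(())
        } else {
            Err("connection dead".to_string())
        }
    }
}

fn sadd_with_reconnect<F>(conn: &mut Conn, make_conn: F, key: &str, member: &str) -> Result<(), String>
where
    F: Fn() -> Conn,
{
    if conn.sadd(key, member).is_err() {
        // Connection looks dead: build a fresh one, store it, retry once.
        *conn = make_conn();
        return conn.sadd(key, member);
    }
    Ok(())
}

fn main() {
    let mut conn = Conn { alive: false };
    let result = sadd_with_reconnect(&mut conn, || Conn { alive: true }, "anchor_mids", "stream-id");
    assert!(result.is_ok());
}
```

The real version would be async and would need to swap the connection behind the existing `Arc<Mutex<…>>`, but the shape (detect failure, rebuild from the client, retry once) is the same.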

```rust
    peer: &Peer,
    stream_id: String,
) -> Result<bool, anyhow::Error> {
    let client = reqwest::Client::new();
```
Contributor

Creating a new client is heavy (it has an internal connection pool). We should get one out of the user_data object, a local static, or pass it in as a parameter.

See https://docs.rs/reqwest/latest/reqwest/struct.Client.html, specifically: "The Client holds a connection pool internally, so it is advised that you create one and reuse it."

Contributor (Author)

I do not see how we can use the GooseUser outside a GooseTransaction. We can create a static client and use that; for now I can create it in simulate.rs.
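The "create it once, pass it in" alternative the reviewer mentions could look roughly like this. It is a std-only sketch of the pattern, not code from the diff: `Client` stands in for `reqwest::Client`, `index_stream` for the request helper, and `simulate` for the entry point in simulate.rs.

```rust
// Sketch of "build the client once and pass it as a parameter".
// All names here are illustrative stand-ins, not from the PR.

#[derive(Clone)]
struct Client; // imagine: reqwest::Client::new(), built exactly once

// Would issue an HTTP request, reusing the client's connection pool
// instead of constructing a fresh client (and pool) per call.
fn index_stream(_client: &Client, stream_id: &str) -> Result<bool, String> {
    Ok(!stream_id.is_empty())
}

fn simulate() -> usize {
    let client = Client; // one construction for the whole run
    let streams = ["stream-a", "stream-b"]; // hypothetical stream ids
    streams
        .iter()
        .filter(|s| index_stream(&client, s).unwrap_or(false))
        .count()
}

fn main() {
    assert_eq!(simulate(), 2);
}
```

A `static` initialized lazily (via once_cell or std's OnceLock) achieves the same reuse when threading a parameter through Goose transactions is awkward, which matches the author's plan above.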

```diff
@@ -207,20 +237,24 @@ async fn instantiate_small_model(
     user: &mut GooseUser,
     store_in_redis: bool,
     conn: Arc<tokio::sync::Mutex<MultiplexedConnection>>,
     only_once_per_network: bool,
```
Contributor

only_once_per_network confused me at first; I was thinking it was once globally, not one writer. Maybe we can call it one_sided_test or one_writer_per_network or something? "once" makes me think literally once, and that doesn't seem to be the intention.

@samika98 samika98 requested a review from dav1do June 17, 2024 21:18
```diff
@@ -30,6 +30,7 @@ serde_ipld_dagcbor = "0.6"
 serde_ipld_dagjson = "0.2"
 schemars.workspace = true
 serde_json.workspace = true
+once_cell = "1.19.0"
```
Contributor

Nice. Just an FYI: you can use an OnceLock from the std library now, but I prefer the API on once_cell, tbh.
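For reference, the std alternative the reviewer mentions looks roughly like this. A minimal sketch only; `network_name` and its value are made up for illustration. With once_cell you would write `static NAME: Lazy<String> = Lazy::new(|| …)` and dereference it; with std's `OnceLock` the init closure moves to the call site via `get_or_init`.

```rust
use std::sync::OnceLock;

// Lazily initialized global using only the standard library.
// The closure passed to get_or_init runs at most once, on first use;
// every later call returns the same &'static reference.
fn network_name() -> &'static String {
    static NAME: OnceLock<String> = OnceLock::new();
    NAME.get_or_init(|| "one-writer-network".to_string())
}

fn main() {
    assert_eq!(network_name().as_str(), "one-writer-network");
    // Same instance on every call, not a fresh allocation.
    assert!(std::ptr::eq(network_name(), network_name()));
}
```

The trade-off is ergonomics: `once_cell::sync::Lazy` bundles the initializer with the declaration, while `OnceLock` avoids the extra dependency.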

@samika98 samika98 force-pushed the feature/aes-84-validate-sync-and-nightly branch from 4cc9317 to df3a648 on June 18, 2024 15:12
@samika98 samika98 merged commit ccad158 into feature/aes-84-validate-correctness-in-keramik-ceramic-anchoring-benchmark Jun 18, 2024
1 check passed
@samika98 samika98 deleted the feature/aes-84-validate-sync-and-nightly branch June 18, 2024 15:52