console-subscriber crate has low test coverage #450
hds added a commit that referenced this issue on Jul 18, 2023
The `ConsoleLayer` builder provides the user with a console layer and a server, which is used to start the gRPC server. However, it may be desirable to expose the instrumentation server together with other services on the same Tonic router. This was requested explicitly in #428. Additionally, to add tests which make use of the instrumentation server (as part of improving test coverage for #450), more flexibility is needed than what is provided by the current API. Specifically, we would like to connect a client and server via an in-memory channel, rather than a TCP connection.

This change adds an additional method to `console_subscriber::Server` called `into_parts`, which allows the user to access the `InstrumentServer` directly. A handle which controls the lifetime of the `Aggregator` is also provided, as the user must ensure that the aggregator lives at least as long as the instrument server. To facilitate the addition of functionality which would result in more "parts" in the future, `into_parts` returns a non-exhaustive struct rather than a tuple of parts.

Closes: #428
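As an illustration of the API described in that commit message (not code from the PR itself), a sketch of mounting the instrumentation service next to another service on one Tonic router might look like the following. The `instrument_server` and `aggregator` field names are assumptions, since the commit only promises a non-exhaustive struct of parts.

```rust
use console_subscriber::ConsoleLayer;
use tracing_subscriber::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The builder yields the tracing layer and the gRPC server half.
    let (console_layer, server) = ConsoleLayer::builder().build();
    tracing_subscriber::registry().with(console_layer).init();

    // `into_parts` returns a non-exhaustive struct; the field names used
    // here (`instrument_server`, `aggregator`) are assumptions.
    let parts = server.into_parts();
    // Keep the aggregator handle alive at least as long as the server runs.
    let aggregator = parts.aggregator;

    tonic::transport::Server::builder()
        // The console's instrument service sits next to other services on
        // the same Tonic router (the use case requested in #428).
        .add_service(parts.instrument_server)
        // .add_service(my_other_service) // hypothetical second service
        .serve("127.0.0.1:6669".parse()?)
        .await?;

    drop(aggregator);
    Ok(())
}
```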
hds added a commit that referenced this issue on Jul 18, 2023
The `console-subscriber` crate has no integration tests. There are some unit tests, but feature coverage is not very high. Recently, we've found or fixed a few errors which probably could have been caught by a medium level of integration testing. However, testing `console-subscriber` isn't straightforward. It is effectively a tracing subscriber (or layer) on one end, and a gRPC server on the other end.

This change adds enough of a testing framework to write some initial integration tests. It is the first step towards closing #450. Each test comprises 2 parts:
- One or more "expected tasks"
- A future which will be driven to completion on a dedicated Tokio runtime.

Behind the scenes, a console subscriber layer is created and its server part is connected to a duplex stream. The client of the duplex stream then records incoming updates and reconstructs "actual tasks". The layer itself is set as the default subscriber for the duration of `block_on`, which is used to drive the provided future to completion.

The expected tasks have a set of "matches", which is how we find the actual task that we want to validate against. Currently, the only value we match on is the task's name.

The expected tasks also have a set of expectations. These are other fields on the actual task which are validated once a matching task is found. Currently, the two fields which can have expectations set on them are the `wakes` and `self_wakes` fields.

So, to construct an expected task which will match a task with the name `"my-task"` and then validate that the matched task gets woken once, the code would be:

```rust
ExpectedTask::default()
    .match_name("my-task")
    .expect_wakes(1);
```

A future which passes this test could be:

```rust
async {
    task::Builder::new()
        .name("my-task")
        .spawn(async {
            tokio::time::sleep(std::time::Duration::ZERO).await
        })
}
```

The full test would then look like:

```rust
fn wakes_once() {
    let expected_task = ExpectedTask::default()
        .match_name("my-task")
        .expect_wakes(1);

    let future = async {
        task::Builder::new()
            .name("my-task")
            .spawn(async {
                tokio::time::sleep(std::time::Duration::ZERO).await
            })
    };

    assert_task(expected_task, future);
}
```

The PR depends on 2 others:
- #447, which fixes an error in the logic that determines whether a task is retained in the aggregator or not.
- #451, which exposes the server parts and is necessary to allow us to connect the instrument server and client via a duplex channel.

This change contains some initial tests for wakes and self wakes which would have caught the error fixed in #430. Additionally, there are tests for the functionality of the testing framework itself.
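To make the duplex-stream wiring described above concrete, here is a rough sketch (not the PR's actual test code) of serving the instrument server over one end of a `tokio::io::duplex` pipe and connecting a tonic channel over the other end, using the `tower` and `tokio-stream` crates. The `instrument_server`/`aggregator` field names and the tonic version (roughly 0.9, where `DuplexStream` can be used directly as a connection) are assumptions.

```rust
use tonic::transport::{Endpoint, Server, Uri};
use tower::service_fn;

async fn connect_in_memory() -> Result<tonic::transport::Channel, Box<dyn std::error::Error>> {
    // In a real test the layer would be installed as the default subscriber.
    let (_console_layer, console_server) = console_subscriber::ConsoleLayer::builder().build();
    // Assumed field names on the non-exhaustive struct returned by `into_parts` (#451).
    let parts = console_server.into_parts();
    let instrument_server = parts.instrument_server;
    // The aggregator must also be kept running at least as long as the
    // server; its handling is omitted here for brevity.
    let _aggregator = parts.aggregator;

    // An in-memory pipe instead of a TCP connection.
    let (client_io, server_io) = tokio::io::duplex(1024);

    // Serve the instrument server over one end of the pipe; the "incoming"
    // stream contains exactly this one connection.
    tokio::spawn(async move {
        Server::builder()
            .add_service(instrument_server)
            .serve_with_incoming(tokio_stream::once(Ok::<_, std::io::Error>(server_io)))
            .await
    });

    // Connect a tonic channel over the other end; the URI is required by
    // tonic but never actually dialled.
    let mut client_io = Some(client_io);
    let channel = Endpoint::try_from("http://[::]:6669")?
        .connect_with_connector(service_fn(move |_: Uri| {
            let io = client_io.take();
            async move {
                io.ok_or_else(|| {
                    std::io::Error::new(std::io::ErrorKind::Other, "connection already taken")
                })
            }
        }))
        .await?;

    // An `InstrumentClient` (from the `console-api` crate) can then be
    // created on top of this channel to record incoming updates.
    Ok(channel)
}
```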
hawkw pushed a commit that referenced this issue on Jul 28, 2023
The `ConsoleLayer` builder provides the user with a console layer and a server, which is used to start the gRPC server. However, it may be desirable to expose the instrumentation server together with other services on the same Tonic router. This was requested explicitly in #428. Additionally, to add tests which make use of the instrumentation server (as part of improving test coverage for #450), more flexibility is needed than what is provided by the current API. Specifically, we would like to connect a client and server via an in-memory channel, rather than a TCP connection.

This change adds an additional method to `console_subscriber::Server` called `into_parts`, which allows the user to access the `InstrumentServer` directly. The `Aggregator` is also returned and must be set to run for at least as long as the instrument server. This allows the aggregator to be spawned wherever the user wishes. To facilitate the addition of functionality which would result in more "parts" in the future, `into_parts` returns a non-exhaustive struct rather than a tuple of parts.

Closes: #428
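The merged design differs from the earlier draft in that the caller drives the `Aggregator` itself. A brief sketch of that responsibility follows; the field names and the async `run` method are assumptions based on the description above.

```rust
async fn serve_console() -> Result<(), Box<dyn std::error::Error>> {
    let (_console_layer, server) = console_subscriber::ConsoleLayer::builder().build();
    // Assumed field names; `into_parts` returns a non-exhaustive struct.
    let parts = server.into_parts();

    // Spawn the aggregator wherever we like; it must outlive the instrument
    // server, so it is only shut down after `serve` returns. The async
    // `run` method is an assumption based on the description above.
    let aggregator = tokio::spawn(parts.aggregator.run());

    tonic::transport::Server::builder()
        .add_service(parts.instrument_server)
        .serve("127.0.0.1:6669".parse()?)
        .await?;

    aggregator.abort();
    Ok(())
}
```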
hds added a commit that referenced this issue on Sep 6, 2023
The `console-subscriber` crate has no integration tests. There are some unit tests, but feature coverage is not very high. Recently, we've found or fixed a few errors which probably could have been caught by a medium level of integration testing. However, testing `console-subscriber` isn't straightforward. It is effectively a tracing subscriber (or layer) on one end, and a gRPC server on the other end.

This change adds enough of a testing framework to write some initial integration tests. It is the first step towards closing #450. Each test comprises 2 parts:
- One or more "expected tasks"
- A future which will be driven to completion on a dedicated Tokio runtime.

Behind the scenes, a console subscriber layer is created and its server part is connected to a duplex stream. The client of the duplex stream then records incoming updates and reconstructs "actual tasks". The layer itself is set as the default subscriber for the duration of `block_on`, which is used to drive the provided future to completion.

The expected tasks have a set of "matches", which is how we find the actual task that we want to validate against. Currently, the only value we match on is the task's name.

The expected tasks also have a set of "expectations". These are other fields on the actual task which are validated once a matching task is found. Currently, the two fields which can have expectations set on them are `wakes` and `self_wakes`.

So, to construct an expected task which will match a task with the name `"my-task"` and then validate that the matched task gets woken once, the code would be:

```rust
ExpectedTask::default()
    .match_name("my-task")
    .expect_wakes(1);
```

A future which passes this test could be:

```rust
async {
    task::Builder::new()
        .name("my-task")
        .spawn(async {
            tokio::time::sleep(std::time::Duration::ZERO).await
        })
}
```

The full test would then look like:

```rust
fn wakes_once() {
    let expected_task = ExpectedTask::default()
        .match_name("my-task")
        .expect_wakes(1);

    let future = async {
        task::Builder::new()
            .name("my-task")
            .spawn(async {
                tokio::time::sleep(std::time::Duration::ZERO).await
            })
    };

    assert_task(expected_task, future);
}
```

The PR depends on 2 others:
- #447, which fixes an error in the logic that determines whether a task is retained in the aggregator or not.
- #451, which exposes the server parts and is necessary to allow us to connect the instrument server and client via a duplex channel.

This change contains some initial tests for wakes and self wakes which would have caught the error fixed in #430. Additionally, there are tests for the functionality of the testing framework itself.

Co-authored-by: Eliza Weisman <eliza@buoyant.io>
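As an additional example in the same style (not part of the commit), a test for the `self_wakes` field might look like the following; the `expect_self_wakes` method name is assumed by analogy with `expect_wakes`, and `tokio::task::yield_now()` is used because it wakes the task from inside its own poll.

```rust
fn self_wakes_once() {
    let expected_task = ExpectedTask::default()
        .match_name("my-task")
        // Assumed method name, by analogy with `expect_wakes` above.
        .expect_self_wakes(1);

    let future = async {
        task::Builder::new()
            .name("my-task")
            .spawn(async {
                // `yield_now` wakes the task from within its own poll,
                // so it should be recorded as a self-wake.
                tokio::task::yield_now().await
            })
    };

    assert_task(expected_task, future);
}
```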
What crate(s) in this repo are involved in the problem?
console-subscriber
What is the issue?
The `console-subscriber` crate has no integration tests. There are some unit tests, but the coverage isn't great. Recently, we've found or fixed a few issues which probably could have been caught with a bit of testing: for example, issue #378 (fixed in #440), and the fixes #430 and #447.
Possible solution
Ideally, we would like a way to test the `console-subscriber` end to end, starting with an instrumented tokio runtime and ending with a validation of the tasks received by a client of the gRPC server. Much of this setup is likely to be duplicated between tests, so some of that test framework should be factored out into common code, although we likely don't need anything as complete as `tracing-mock` in the tracing project.

Would you like to work on fixing this bug?
yes, I already am
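For concreteness, here is a minimal sketch of what the "instrumented tokio runtime" end of such a test might look like; the `ConsoleLayer::builder()` call is the crate's existing public API, but the `drive_test_future` helper and the surrounding wiring are purely illustrative.

```rust
use tracing_subscriber::prelude::*;

fn drive_test_future<F: std::future::Future>(future: F) {
    // Build the console layer and its gRPC server half.
    let (console_layer, _server) = console_subscriber::ConsoleLayer::builder().build();
    let subscriber = tracing_subscriber::registry().with(console_layer);

    let runtime = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .expect("failed to build runtime");

    // Install the layer only for the duration of the test, then drive the
    // instrumented future to completion on a dedicated runtime. In a real
    // test, `_server` would be connected to a gRPC client (e.g. over a
    // duplex stream) which validates the task updates it receives.
    tracing::subscriber::with_default(subscriber, || {
        runtime.block_on(future);
    });
}
```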