
Feature Test Summary - Liberty Tools VS Code - 23.0.6 #311

Open
TrevCraw opened this issue Nov 10, 2023 · 0 comments

Test Strategy

Describe the test strategy & approach for this feature, and describe how the approach verifies the functions delivered by this feature.

Liberty Tools VS Code consists of two main features:

  • UI integration with the VS Code IDE that builds on the Liberty Maven/Gradle plugins and VS Code components (e.g., launching a custom start through the command palette)
  • Integration of a set of language server implementations and extensions for domain-specific code assist: LSP4Jakarta (Jakarta EE 9/10 APIs) and LCLS (bootstrap.properties and server.env server configuration files, with an XML LS extension for server.xml support)

The test strategy for Liberty Tools VS Code is to use a VS Code test framework to drive actions through the UI, performing end-to-end tests against the core features Liberty Tools VS Code provides (i.e., running Liberty actions such as start and stop). In the future, there will also be UI-framework-driven tests covering the base functionality provided by the various language server integrations.
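To illustrate the approach, an end-to-end UI test of this kind might look like the sketch below, written against the vscode-extension-tester framework. This is a minimal sketch, not the project's actual test code: the project path, dashboard section title, item label, and action name are all assumptions for illustration.

```typescript
// Hypothetical sketch of a UI-driven end-to-end test using vscode-extension-tester.
// The project path, view title, item label, and action name are assumptions.
import { expect } from "chai";
import { BottomBarPanel, SideBarView, VSBrowser } from "vscode-extension-tester";

describe("Liberty dashboard actions", function () {
  this.timeout(120_000); // UI tests are slow; allow ample time

  it("runs the Start action on a sample Maven project", async () => {
    // Open the sample project in the VS Code instance under test
    await VSBrowser.instance.openResources("path/to/sample-maven-project");

    // Locate the Liberty dashboard in the side bar (section title is an assumption)
    const section = await new SideBarView()
      .getContent()
      .getSection("Liberty Dashboard");
    const project = await section.findItem("sample-maven-project");
    expect(project).to.not.be.undefined;

    // Drive the Start action through the project's context menu
    const menu = await project!.openContextMenu();
    await menu.select("Start");

    // Verify that dev mode surfaced a terminal, i.e. the action was launched
    const terminal = await new BottomBarPanel().openTerminalView();
    expect(terminal).to.not.be.undefined;
  });
});
```

Because these tests drive a real VS Code instance, they run under a test runner that downloads and launches VS Code rather than as plain unit tests.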

List of FAT projects affected

  • N/A

Test strategy

  • What functionality is new or modified by this feature?
    • All functionality is new
  • What are the positive and negative tests for that functionality? (Describe the specific scenarios you tested: which tests verify that everything works as expected (positive tests), and which tests verify that we fail gracefully when things go wrong (negative tests)? See the Positive and negative tests section of the Feature Test Summary Process wiki for more detail.)
  • What manual tests are there (if any)? (Note: Automated testing is expected for all features with manual testing considered an exception to the rule.)
    • Manual tests are used to cover any gaps in our automation and to confirm the accuracy of the automated tests. The details of the manual tests performed can be found here: https://ibm.ent.box.com/notes/1232498470722 (IBM only)
    • Installation verification is done manually once the release is available in the Visual Studio Marketplace

Confidence Level

Collectively as a team, you need to assess your confidence in the testing delivered based on the values below. This should be done as a team and not by an individual, to ensure more eyes are on it and that pressures to deliver quickly are absorbed by the team as a whole.

Please indicate your confidence in the testing (up to and including FAT) delivered with this feature by selecting one of these values:

0 - No automated testing delivered

1 - We have minimal automated coverage of the feature including golden paths. There is a relatively high risk that defects or issues could be found in this feature.

2 - We have delivered reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that could be done here. Error/outlying scenarios are not really covered. There are likely risks that issues exist in the golden paths.

3 - We have delivered all automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error/outlying scenarios. There is a risk when the feature is used outside the golden paths; however, we are confident in the golden path. Note: This may still be a valid end state for a feature; for example, Beta features may well suffice at this level.

4 - We have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error/outlying scenarios. While more testing of the error/outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide.

5 - We have delivered all automated testing we believe is needed for this feature. The testing covers all golden path cases as well as all the error/outlying scenarios that make sense. We are not aware of any gaps in the testing at this time. No manual testing is required to verify this feature.

Based on your answer above, for any answer other than a 4 or 5, please provide details of what drove your answer. Be aware that it may be perfectly reasonable in some scenarios to deliver at any of the values above: we may accept that no automated testing is needed for some features, or be happy with low levels of testing on samples, for instance, so please don't feel the need to drive to a 5. We need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid. What are the gaps? What is the risk? Please also provide links to the follow-on work needed to close the gaps (should you deem it needed).

1 - We have minimal automated coverage of the feature including golden paths. There is a relatively high risk that defects or issues could be found in this feature.

The automated test coverage level is 1 because we do not have any automated tests covering the language server features. However, we did perform rigorous manual testing to ensure there were no defects, so our confidence in what is released is much higher than the level alone suggests.

We are aware of the gaps in our test automation and are working to resolve them as we are able. Golden path test items are listed below:
#123
#124
#125

All automated test items: https://github.com/OpenLiberty/liberty-tools-vscode/issues?q=is%3Aissue+is%3Aopen+label%3A%22automated+tests%22
