GSoC 2023 Term: AQAvit Assessment and Prioritization
Related to the Measure the effectiveness and efficiency of our testing issue and one of the key criteria in the AQAvit manifesto (particularly Section 2.3.3, Codecov & Other Metrics), we want to select Key Performance Indicators (KPIs) and metrics that allow us to assess and adjust AQAvit in order to keep it vital and relevant to those who use it.
This project would have a GSoC intern look at the various ways we assess the test material we include. It aims to provide tools and approaches that help answer the question: "How effective is this test at finding defects?"
This project is proposed at 350 hours.
Determine an initial set of KPIs to use within the scope of this project. Some ideas include, but are not limited to, the following:
- defect escape rate (related: https://dzone.com/articles/how-to-measure-defect-escape-rate-to-keep-bugs-out, https://sqa.stackexchange.com/questions/46976/how-to-calculate-visualize-a-defect-escape-metric-in-github)
- defect density (related: https://www.softwaretestinghelp.com/defect-density/)
- adoptium-support/issues in relation to different Java packages or functionality: whether each report is a usage error or a real defect, and whether that defect has already been found/reported
- The Measure the effectiveness and efficiency of our testing issue mentions things like scanning GitHub issues for all defects. A practical approach might be to start with the ProblemList files that are used to exclude failing tests. Those files are found in the aqa-tests/openjdk/excludes directory, and each entry contains the name of the test, the issue the problem has been reported under, and the platforms the test should be excluded on. When one looks at the related issue, is it easy to determine whether the root cause of the problem is in test code or product code? One metric might be the number of testcases excluded because of real product-code issues versus test-code or infrastructure issues, i.e. `exclusions due to product issues / total exclusions`. Can you think of an automated way to determine this from the contents of the ProblemList files? (A rough sketch follows below.)
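As a starting point for automating that ratio, here is a minimal Python sketch. It assumes the whitespace-separated entry layout described above (test name, issue reference, platforms) and a hypothetical `classify` callback, for example backed by GitHub issue labels or a manually curated list, that decides whether an issue is a product, test, or infra bug; the file name and example labels below are made up.

```python
from collections import Counter

def parse_problem_list(path):
    """Yield (test_name, issue_ref, platforms) tuples from a ProblemList file."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments.
            if not line or line.startswith("#"):
                continue
            parts = line.split()
            if len(parts) >= 3:
                yield parts[0], parts[1], parts[2]

def product_exclusion_ratio(path, classify):
    """Compute (exclusions due to product issues) / (total exclusions).

    `classify` maps an issue reference to 'product', 'test', or 'infra'.
    """
    counts = Counter(classify(issue)
                     for _test, issue, _platforms in parse_problem_list(path))
    total = sum(counts.values())
    return counts["product"] / total if total else 0.0

# Example with a hand-labelled classifier (the labels here are made up):
labels = {"https://github.com/adoptium/aqa-tests/issues/1234": "product"}
ratio = product_exclusion_ratio("ProblemList.txt",
                                lambda issue: labels.get(issue, "test"))
print(f"product-issue exclusion ratio: {ratio:.2f}")
```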
Investigate the current state of the art and available open-source tools that could potentially be used.
Lines of enquiry:
- defect escape analysis (how costs go up when defects escape to the field for customers to find and report; related to the defect escape rate KPI)
- code coverage versus functional coverage, and the difference between them (related: https://8thdaytesting.com/2016/05/04/a-quick-n-dirty-intro-to-combinatorial-test-design/)
- mutation testing (see PIT, https://pitest.org, as an example)
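To make the defect escape rate KPI concrete, here is a minimal sketch of the commonly used formula: escaped defects divided by all known defects. How escaped versus caught defects are counted, for example adoptium-support reports versus failures caught in AQAvit runs, would be a project decision; the numbers below are purely illustrative.

```python
def defect_escape_rate(escaped, caught_in_test):
    """Fraction of all known defects that escaped to the field.

    escaped: defects found by users/in the field (e.g. adoptium-support reports)
    caught_in_test: defects found by the test suite before release
    """
    total = escaped + caught_in_test
    return escaped / total if total else 0.0

# Illustrative numbers only: 12 field-reported defects, 88 caught in testing.
print(defect_escape_rate(12, 88))  # -> 0.12
```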
Integrate open-source tooling and/or create scripts/tools that collect and compare the KPIs and generate a standard form of report (and can be scheduled to run regularly).
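As one possible shape for that standard report, here is a hypothetical sketch that renders collected KPI values as a dated Markdown table; a scheduled job (cron or a CI workflow) could run it regularly and publish the output. The KPI names and values are placeholders.

```python
from datetime import date

def render_report(kpis):
    """Render a dict of KPI name -> value as a dated Markdown table."""
    lines = [
        f"# AQAvit KPI report, {date.today().isoformat()}",
        "",
        "| KPI | Value |",
        "| --- | --- |",
    ]
    lines += [f"| {name} | {value:.3f} |" for name, value in kpis.items()]
    return "\n".join(lines)

# Placeholder values wired together from the sketches above.
print(render_report({
    "defect escape rate": 0.12,
    "product-issue exclusion ratio": 0.35,
}))
```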
- Additionally, are there things that could be done to help triage (determine the root cause of a test failure) faster? Enhancing TKG to report more information about the machine environment, for example (related: https://github.com/adoptium/TKG/issues/45 and https://github.com/adoptium/TKG/issues/414)
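For the TKG enhancement idea, a sketch of the kind of machine-environment snapshot that could be attached to a failing-test report to speed up triage. The field names are illustrative and are not TKG's actual output format.

```python
import json
import os
import platform

def environment_snapshot():
    """Collect basic machine details worth attaching to a failing-test report."""
    return {
        "hostname": platform.node(),
        "os": platform.system(),
        "os_release": platform.release(),
        "arch": platform.machine(),
        "cpu_count": os.cpu_count(),
    }

print(json.dumps(environment_snapshot(), indent=2))
```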
Document the outcomes of the project.
For questions and discussion, please join the #gsoc channel in our Slack workspace (https://adoptium.net/slack/).
Mentors: Lan Xia, Renfei Wang, Shelley Lambert
Additional consultant mentors: Sophia Guo