allTestsInfo may be incorrect #860
Comments
sanity.openjdk looks correct to me: https://trss.adoptium.net/allTestsInfo?buildId=65fb17d643ff67006ef89f93&limit=5
Indeed (which is why I raised this issue: to check what is happening in the extended jobs that differs from sanity). One difference is that there are 3 child jobs under extended.openjdk, versus sanity.openjdk, whose results are parsed from a single console log.
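For context, a minimal sketch of how per-child test totals could be summed for a parent build. TRSS stores results in MongoDB, but the collection and field names used here (builds, parentId, testSummary.total) are assumptions for illustration, not TRSS's actual schema:

```typescript
// Hedged sketch: sum test totals across the child builds of one parent build.
// Collection and field names (builds, parentId, testSummary.total) are assumed
// for illustration and are not necessarily TRSS's real schema.
import { MongoClient, ObjectId } from "mongodb";

async function childTestTotal(parentId: string): Promise<number> {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  try {
    const builds = client.db("exampleTrssDb").collection("builds");
    const rows = await builds
      .aggregate([
        { $match: { parentId: new ObjectId(parentId) } },
        { $group: { _id: null, total: { $sum: "$testSummary.total" } } },
      ])
      .toArray();
    return rows.length > 0 ? (rows[0].total as number) : 0;
  } finally {
    await client.close();
  }
}
```

If only the latest rerun child is matched here (or the children are re-created with reset build numbers), the summed total would drop, which matches the symptom described above.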
Actually seems
From the TRSS history, the Jenkins test build https://ci.adoptium.net/job/Test_openjdk22_hs_extended.openjdk_x86-64_mac_testList_0/3/ was created on Sep 22, 2023. The root build was https://ci.adoptium.net/job/build-scripts/job/openjdk22-pipeline/69 (it no longer exists). Then
I think we had this situation before.
Aw sucky. I am not sure we can guarantee that jobs won't be deleted underneath Jenkins. I remember answering the question "can we delete these", and saying yes, but I had assumed the deletion would happen in the Jenkins GUI, which would have meant that the job ID count would not have been lost. They must have been removed by logging on to the Jenkins server and deleting the workspaces. Should we consider using a key that includes the parent IDs (for the TRSS DB index)? sigh...
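Purely as a thought experiment, a sketch of what such a key could look like. The collection and field names (builds, parentId, buildName, buildNum) are assumptions for illustration, not the existing TRSS index:

```typescript
// Hedged sketch of a compound key that includes the parent build, so a
// regenerated child job restarting at build #1 cannot collide with an
// earlier child of a different parent. Names are illustrative only.
import { MongoClient } from "mongodb";

async function ensureBuildIndex(): Promise<void> {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  try {
    const builds = client.db("exampleTrssDb").collection("builds");
    await builds.createIndex(
      { parentId: 1, buildName: 1, buildNum: 1 },
      { unique: true, name: "parent_buildName_buildNum" }
    );
  } finally {
    await client.close();
  }
}
```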
Ah, gotcha - having read this, the issue is potentially related to the removal of the testList jobs as per adoptium/infrastructure#2774 (comment). That would make sense, and it would have reset the counters to zero since the jobs were regenerated on demand. The problem was not the individual job runs (so removing those alone would have made no difference) but the testList jobs themselves, which needed to be regenerated because they were the cause of the parameter errors in the logs; that is why they were deleted, as I wasn't aware of a way to refresh them all (they seemed to be causing warnings regardless of whether they were being invoked, from what I could see). While in this case I did the work directly on the filesystem to avoid a lot of clicking, I believe deleting the job definition via the UI (which is what would have been needed to force regeneration) would have had the same effect of "losing" the last build number, since it removes all trace of the job, including that information.
Just to clarify: if a Jenkins test job is regenerated, will the previously executed job history cause warnings? If so, that sounds like a Jenkins issue; the previously executed job history should be static.
@smlambert noticed that the recent release run shows allTestsInfo may be incorrect for some builds.
Example: the jdk22 release mac extended.openjdk job, which should have around 16+39+38 tests, but TRSS only shows 3 (Pre and Post tests are not counted as tests).
https://trss.adoptium.net/allTestsInfo?buildId=65fb17d643ff67006ef89f92&limit=5&hasChildren=true
This happens to all jobs with a rerun: only the tests from the rerun show up. The expected behaviour is that the tests from the original run and the rerun are combined (see the sketch below).
This might be related to the recent update in TKG: adoptium/TKG#510.
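As a rough illustration of the expected combining behaviour (not TRSS's actual parsing code; the Test shape and field names are assumed for this sketch), a rerun result should replace the original result for the same test name, so every executed test is still counted once:

```typescript
// Hedged sketch: combine the original run's tests with the rerun's, letting a
// rerun result override the original for the same test name. The Test shape
// and field names are illustrative, not TRSS's actual data model.
interface Test {
  testName: string;
  testResult: "PASSED" | "FAILED" | "SKIPPED";
}

function combineRuns(original: Test[], rerun: Test[]): Test[] {
  const byName = new Map<string, Test>();
  for (const t of original) byName.set(t.testName, t);
  for (const t of rerun) byName.set(t.testName, t); // rerun overrides original
  return [...byName.values()];
}

// Example: combineRuns(originalTests, rerunTests).length should reflect all
// executed tests, not just the handful that were rerun.
```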