Present more detailed information from CI results #572
Comments
It sounds like Cervator would like the opportunity to address the maintainability concerns that have come up with Jenkins, and to offer Jenkins as a way of providing this sort of interface to build results. There's probably an issue for that in another repo or a card on a Trello board somewhere?
Here's a more concrete example of what's missing from the current view, as compared to Jenkins. (@jdrueckert, I hope this gives you the specifics you were asking for yesterday in Discord.)
I'd like to mention the GitHub Checks API in this context: https://developer.github.com/v3/checks/ There are GitHub Apps integrating with it to annotate the code with what went wrong (see https://github.com/marketplace/check-run-reporter) - maybe an even better option than a nested view in Jenkins?
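To make that concrete, here's a minimal sketch of creating an annotation through that API from a workflow step. Everything specific in it - the check name, file path, line number, and failure message - is a made-up placeholder, and it assumes the workflow's built-in GITHUB_TOKEN is allowed to create check runs:

```yaml
# Hypothetical step: create a check run with one inline annotation via the
# GitHub Checks API. Path, line numbers, and messages are placeholders.
- name: Annotate failing test (sketch)
  run: |
    curl -s -X POST \
      -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
      -H "Accept: application/vnd.github.v3+json" \
      https://api.github.com/repos/${{ github.repository }}/check-runs \
      -d '{
        "name": "unit-tests",
        "head_sha": "${{ github.sha }}",
        "conclusion": "failure",
        "output": {
          "title": "Test failures",
          "summary": "1 test failed",
          "annotations": [{
            "path": "src/main/java/Example.java",
            "start_line": 42,
            "end_line": 42,
            "annotation_level": "failure",
            "message": "expected <3> but was <4>"
          }]
        }
      }'
```

One documented constraint worth knowing up front: the API only accepts 50 annotations per request, so large failure sets need batching.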
Annotations do sound like a great feature, but the thread I linked earlier says they have some significant limitations that mean they can only provide part of the answer.
Looking at JabRef, I see that they set up different jobs for different tests in their pipeline - maybe that's something we could do as an intermediate step to get a bit more visibility on build and test results: https://github.com/JabRef/jabref/blob/master/.github/workflows/tests.yml
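As a sketch of what that intermediate step could look like for us (the Gradle task names here are assumptions, not necessarily our actual tasks):

```yaml
# Sketch: split checks into separate jobs so each gets its own line
# in the PR checks list. Task names are assumed for illustration.
name: Tests
on: [pull_request]

jobs:
  checkstyle:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v1
        with:
          java-version: 11
      - run: ./gradlew checkstyleMain

  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v1
        with:
          java-version: 11
      - run: ./gradlew test
```

Each job shows up as its own entry on the pull request, so a style failure is distinguishable from a test failure at a glance.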
That'd be an improvement, yeah, if it presented separate jobs for each of the things we check. On the other hand, with the checks running as fast as they do, the overhead for each one being a job that has to set up its own runtime might be pretty big in comparison. 🤔 I don't think that's a real blocker; anything we use that has this sort of dynamic worker allocation is going to be the same way. It'll still be plenty fast, assuming there's no shortage of workers in the pool. It's just a little resource-hungry. 🚚
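If the per-job setup cost does become noticeable, dependency caching should soften it. A sketch, assuming a Gradle build and the standard cache action:

```yaml
# Sketch: cache Gradle dependencies so each split-out job doesn't have to
# re-download everything. Paths/keys follow the commonly documented pattern.
- uses: actions/cache@v2
  with:
    path: |
      ~/.gradle/caches
      ~/.gradle/wrapper
    key: ${{ runner.os }}-gradle-${{ hashFiles('**/*.gradle*') }}
    restore-keys: |
      ${{ runner.os }}-gradle-
```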
The current checks that run on pull requests (also known as validation actions, or continuous integration) produce output that looks like a teletype that's trying to save paper by not including stack traces for failing tests.
The output from these tools could be a lot easier to read and much more informative, helping authors, peers, and mentors discover why something is not passing a test without needing to get to their own development machine and check out the branch to try to reproduce the results.
I'm not suggesting we reinvent the wheel or use anything cutting-edge; there's been plenty done in this field. We're using JUnit, which set the standards that a lot of other tools came to follow, so I hope that makes it easy to find something compatible.
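The compatibility hook is the JUnit XML report format, which Gradle already writes by default. As a minimal sketch, just keeping those files around as a build artifact would let any compatible tool (or a curious human) get at the stack traces - the paths below assume Gradle's default report location:

```yaml
# Sketch: preserve JUnit XML reports even when tests fail, so any
# compatible tool can inspect them. Default Gradle paths assumed.
- run: ./gradlew test
- uses: actions/upload-artifact@v2
  if: always()   # upload even when the test step fails
  with:
    name: junit-test-results
    path: '**/build/test-results/test/TEST-*.xml'
```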
This GitHub Actions forum thread on Publishing Test Results concludes there aren't good options for presenting this information in the limited interface available to the GitHub Action itself, but there are third-party integrations that are free for open source repositories.
There are a couple of options that turned up when searching for something GitHub-Action-compatible and that might be worth a closer look:
(This would be a more complete way to address #557)