Improvements to testing tooling #7162
Conversation
Starting to test this out. The first thing I've noticed is that we need to create the /workdir directory; otherwise it seems to fail immediately. I'll benchmark tests on dev too, since this feels pretty slow in comparison:
Looks like it's also using my settings file for the previously loaded build. That won't matter on GH actions, but could be something that trips someone up locally.
Did you just use `PathOfBuilding/docker-compose.yml`, lines 31 to 33 in 61b3d10?
That umask illegal mode error is odd. The tests have always been much slower when running locally than on the GH runner; not sure why that is. Open to suggestions on how it could be improved. Not much has changed for the unit tests container, though. It should use the same versions of Lua and LuaJIT, though the version of busted may differ since the new container pulls the latest one.
IIRC that's what the old tests did, so I didn't change it. Could be something to improve. EDIT: Freshly pulled
The values of the output are now sorted in the XML before being passed into xmllint, to prevent them from jumping around. I've thought about writing a custom diff tool, but honestly I'm not sure what kind of diff output would be most useful.
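The sorting step described above could look something like this sketch using Python's standard `ElementTree` (illustrative only; names and structure are assumptions, not the PR's actual code):

```python
# Sketch: recursively sort an XML element's children so that repeated
# elements can't reorder between runs before the document is diffed.
# The sample document and key function are illustrative assumptions.
import xml.etree.ElementTree as ET

def sort_children(elem):
    """Recursively sort an element's children by (tag, attributes, text)."""
    for child in elem:
        sort_children(child)
    elem[:] = sorted(elem, key=lambda e: (e.tag, sorted(e.attrib.items()), e.text or ""))

doc = ET.fromstring("<out><b>2</b><a x='1'>1</a><a x='0'>0</a></out>")
sort_children(doc)
print(ET.tostring(doc, encoding="unicode"))
```

Sorting by a stable composite key means two runs that produce the same set of values serialize identically, so xmllint (or any plain text diff) only flags real changes.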
Seems like the newer version of LuaJIT is a good bit slower.
CONTRIBUTING.md (outdated)
### Debugging tests
When running tests with a docker container it is possible to use EmmyLua for debugging. Paste in the following right under `function launch:OnInit()` in `./src/Launch.lua`:
```lua
package.cpath = package.cpath .. ";/usr/local/bin/?.so"
```
As far as I understand the FHS, you need to put that EmmyLua library in `/usr/local/lib/` instead. `/usr/local/bin/` is for executable (binary) files, while `emmy_core.so` is a shared object (a library). See https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch04s09.html#purpose24
That's true. According to the FHS, `emmy_core.so` should go into `/usr/local/lib/`, but for some reason it's set up to go into `bin` in EmmyLua's CMake install target: https://github.com/EmmyLua/EmmyLuaDebugger/blob/2b2485d2a4bf6fce80aa05d7f93cf1bfad58a1e8/emmy_core/CMakeLists.txt#L27-L31
I didn't dig into why it was that way. Calling `make install` puts it there (line 33 in 8394e2f):
`RUN --mount=type=cache,from=emmyluadebugger,source=/opt,target=/opt make -C /opt/EmmyLuaDebugger/build/ install`
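If FHS-conformant placement is wanted without patching EmmyLua's CMake install target, one option would be a follow-up instruction in the Dockerfile that relocates the library after install. A sketch, assuming the install step above and the `package.cpath` line in `Launch.lua` would be adjusted to match:

```dockerfile
# Hypothetical follow-up step: move the shared object from /usr/local/bin/
# to the FHS location for locally installed libraries. The corresponding
# package.cpath entry would then need ";/usr/local/lib/?.so" instead.
RUN mv /usr/local/bin/emmy_core.so /usr/local/lib/emmy_core.so
```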
TODO:
Setting as ready for review again. Implemented the ability to choose the source of the reference branches. Implemented better build XML caching so as not to hammer build providers. Implemented warnings when a specific build takes more than 10% longer to compute. My only concern is the dawidd6/action-download-artifact action: https://www.legitsecurity.com/blog/github-actions-that-open-the-door-to-cicd-pipeline-attacks. I don't think the attack vector explained in the post applies here, as the files downloaded from the artifact are never executed and are always extracted to dedicated directories or directories not meant to contain any executable code. Nevertheless I would like someone to triple-check that there isn't some way to combine this with a zip-slip-style vulnerability (those have been found before in the zip library used: https://security.snyk.io/package/npm/adm-zip) to get code execution, or whether there is a better way of doing this (I thought of manually using the GitHub API, but not sure if that's a good idea).
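Independently of which action is used, the zip-slip concern can be mitigated with a cheap defensive check at extraction time. A minimal standard-library sketch (not the PR's actual code, which delegates extraction to the action):

```python
# Sketch: refuse to extract archive members that would resolve outside
# the target directory (the zip-slip pattern, e.g. "../evil.txt").
# Function name and structure are illustrative assumptions.
import os
import zipfile

def safe_extract(zip_path, dest_dir):
    dest_dir = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_path) as zf:
        for member in zf.namelist():
            target = os.path.realpath(os.path.join(dest_dir, member))
            # A safe member resolves to a path whose common prefix with
            # dest_dir is dest_dir itself; anything else escapes it.
            if os.path.commonpath([dest_dir, target]) != dest_dir:
                raise ValueError(f"blocked zip-slip entry: {member}")
        zf.extractall(dest_dir)
```

Since the artifacts here are only data files extracted into dedicated directories, a check like this mostly serves as defence in depth against a malicious archive.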
If you didn't want to use that other action to download artifacts, you could instead set the cache key to today's date, skipping the build list update on a cache hit and refreshing the list on a cache miss (i.e. a new day has started). You wouldn't even have to update builds on a cron anymore; the list would just update whenever needed.
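The date-keyed cache idea could look roughly like this in a workflow. A sketch only: the step ids, paths, and refresh script are hypothetical, though `actions/cache`'s `cache-hit` output is real:

```yaml
# Sketch: key the cached build list to today's date so a cache miss
# (a new day) triggers a refresh. Paths and script name are illustrative.
- name: Get date
  id: date
  run: echo "today=$(date -u +%F)" >> "$GITHUB_OUTPUT"
- name: Cache build list
  id: build-list
  uses: actions/cache@v4
  with:
    path: build-list/
    key: build-list-${{ steps.date.outputs.today }}
- name: Refresh build list
  if: steps.build-list.outputs.cache-hit != 'true'
  run: ./scripts/update-build-list.sh  # hypothetical refresh script
```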
@Wires77 I wanted it to update with cron as I'm worried there may be long periods of time without pull requests. GitHub only stores caches for 7 days and there is no way to change that, so if there happens to be a period longer than 7 days without any pull requests, the list would be lost and it would fall back to the static one, which would bring the stale build list problem back.
Why would it fall back to the static one? The cache hit would fail if a test is run after 7 days, and then the step that downloads a new build list would run. This would occur on PRs, not just the base repo. Correct me if I'm misunderstanding the current workflow.
The problem is that the build list update action is limited by the number of builds returned by the API, which is currently around 3-5 new ones per update. The entire build list update logic is based on accumulating those 3-5 new builds every day and rotating out old ones to keep the list fresh. Since the list hasn't had enough time to accumulate new builds, I pad it with the static list to ensure a decent sample size. If I only updated on a cache miss, I would basically always run the static list plus a few new builds, which would bring back the stale builds problem. If I had an API that just spat out any number of build XMLs, that would be ideal and would greatly simplify the logic, but right now I'm trying to work with what I have access to while not hammering the build providers with crawling/re-downloading the builds.
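The accumulate-and-rotate behaviour described above can be sketched as follows (illustrative names and sizes; the real list lives in a workflow artifact, not in this code):

```python
# Sketch: prepend the few new builds returned by the API each day,
# drop duplicates while keeping the newest occurrence, and trim the
# oldest entries so the sample size stays fixed.
from collections import OrderedDict

def rotate_build_list(current, new_builds, max_size=50):
    """Newest builds first; oldest entries rotate out past max_size."""
    # OrderedDict.fromkeys deduplicates while preserving insertion order.
    merged = OrderedDict.fromkeys(list(new_builds) + list(current))
    return list(merged)[:max_size]

lst = ["b1", "b2", "b3"]
lst = rotate_build_list(lst, ["b4", "b2"], max_size=4)
print(lst)  # b4 lands first; the duplicate b2 is kept only once
```

With only a handful of new entries per day, the list converges on fresh builds over time, which is why a one-shot refresh on cache miss can't replace the daily accumulation.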
I see, I was misunderstanding. I thought the downloaded builds were just overwriting things each time, not appending to the existing list. Build artifacts feel like a weird way to store this info to me; would a different repo that the action can push to make any more sense? Then we could put the cron job on that one and these actions would just pull from there. I'm just spitballing ideas; this seems fine to me if a new repo seems overengineered :)
I don't know about more sense, but I could make it work that way, sure. Handing this PR off is difficult as is, since the sources will need to be changed from my repos to the POB org. Making a new repo to store the build list would fix a few problems, such as running tests locally requiring a third-party service to download the latest build list, and using the third-party action for the artifact download. It would, however, require more effort on your side to integrate, as a new repo would need to be created and secrets set up so that the action can push to the other repo.
Ideally I would want to just use an API that could get me already-decoded XML builds, since that would simplify a ton of things here, but yeah.
Requires further discussion.
This PR will be split up into smaller chunks for easier review.
Description of the problem being solved:
- `test.yml` action to better support test builds.
- `CONTRIBUTING.md` regarding debugging of tests.
- `ModCache.lua`
To simplify the logic I've used my name in place of what should be `pathofbuildingcommunity`, so that will need to be fixed upon merge, as I'd like to keep it this way for testing. Getting the org name in lowercase, especially in actions, proved to be more complicated than I'd like it to be.