Implement persistent build tracking for more intuitive behavior #1714
base: main
Conversation
This is definitely an early POC of this idea to solicit feedback on the general approach.
Please let me know your high-level thoughts as well as workflows this should support :)
tl;dr - I like what I see here.
... or, if there's an entirely new file, but existing files haven't changed. I think this will be caught by the code you've got here - but I'm not sure whether there's an edge case where a file with an old modification date, moved into a directory, will be picked up on all platforms. The only truly safe option would be a hash of all the source files that would be included.
"Change" here gets a bit hairy if the requirement is a reference to a file on disk. Again, we sort of need the hash of referenced sources.
If the first pass is no more than a "You've updated X, you may need to recreate" warning message, it would be a major improvement to developer ergonomics. We could then address specific updates to templated content individually - e.g., we could introduce something that updates just AndroidManifest.xml on a permission change (because XML parsing is a solved problem) without addressing the update of the Gradle file on a library dependency change (because that requires a read-write parser for the Groovy format that I doubt exists for Python).
Yes we do - tomli-w is an existing dependency.
Agreed, especially since it's tied to a specific rollout of a project. If I
I can't think of any other obvious place; if the POC here is any indication, I don't think it's especially onerous. Adding a couple of extra calls to top-level commands isn't an overhead or complexity that concerns me.
As I said at the beginning - this looks like a solid start, broadly in line with what I would have expected to develop if I'd written this myself. A couple of additional edge cases and usages that I noticed:
I agree...but I'm a little nervous about apps or requirements with large files. Reading in hundreds or thousands of megabytes and hashing them could be quite slow on some systems...and at least a noticeable pause on fast systems. That suggests an option to disable it will be necessary. Alternatively, we could split the difference and maybe hash a listing of the files and their timestamps.
Whoops! Thanks for reminding me.
Absolutely agreed. If this is a check that needs to run on every run, it needs to be able to complete very quickly, even on projects with a large number of files, or projects with individually large files.
Yeah - if adding a file doesn't alter the modification date of a directory, a diff/hash on the list of files should be sufficient.
Git has to solve a similar problem, and I believe the way it works is to only re-check a file's hash if its timestamp or size has changed.
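A rough sketch of that stat-before-hash strategy (illustrative names and a plain `CachedEntry` record - an assumption, not Git's or Briefcase's actual code): only fall back to reading and hashing the file when its cheap `stat()` metadata no longer matches the cache.

```python
import hashlib
from pathlib import Path
from typing import Optional

class CachedEntry:
    """Metadata recorded the last time the file was hashed (hypothetical)."""
    def __init__(self, size: int, mtime_ns: int, digest: str):
        self.size = size
        self.mtime_ns = mtime_ns
        self.digest = digest

def content_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def file_changed(path: Path, cached: Optional[CachedEntry]) -> bool:
    """Return True if the file differs from the cached entry."""
    if cached is None:
        return True
    stat = path.stat()
    if stat.st_size == cached.size and stat.st_mtime_ns == cached.mtime_ns:
        # Metadata matches: trust the cache and skip the expensive hash.
        return False
    # Metadata changed: fall back to comparing content hashes.
    return content_hash(path) != cached.digest
```

The expensive `read_bytes()` only runs on the (hopefully rare) files whose size or timestamp moved, which is what keeps the common "nothing changed" pass fast.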
If I allow the Command to control where the build tracking database lives, then I can have the
Great point; thanks.
I'd lean towards a hidden folder in the project root (or maybe somewhere in the venv's share/data folder?), rather than trying to cram this into
Force-pushed from d7ebb95 to 3af40d1
Ok; I'll finalize the location along with the format of the file as I keep working on this. Additionally, I reworked how the Commands call each other today to better support this more intuitive behavior. Previously, the Therefore, Similarly, Furthermore, I've also made
That definitely looks like an elegant cleanup. The only note I've got is whether the (Also, in true bike shed territory - the specific naming "is_app_updated" seems unwieldy to me. The "is" prefix doesn't add anything, and what would be updating other than an app?)
This is definitely a weird edge case. The intention was to prevent any automated update - that is, updates that happen just because you're running the command. It's only an option on The question that your change introduces: What does Previously, it would update - because you've explicitly requested an update. The new behaviour will prevent the update in both cases, but won't raise any error. At the very least, there's a need for a check to warn the user that they've asked for something contradictory. However, I wonder if maybe the better fix here is to drop the "implicit update" as part of test, and just expect that "normal" testing is
I'll need to assess all this more fully....but a different perspective may not consider this as confusing. In a lot of CLI apps I've used, they allow for negating previously specified arguments; so, one could consider
For sure - interpretation matters a lot here. Making the situation impossible is the best option; but if it's unavoidable, there's no good answer - just an edge case you can document and/or warn about. If we're keeping both, I guess I might lean toward "no" being the interpretation on the basis that it's better to be non-destructive when there's ambiguity - but that's a very weakly held opinion.
After reviewing this and the history more closely, I understand better what you're saying: that is, when

To that end, I would argue I am trying to change this paradigm of operation for Briefcase. Currently, after Briefcase initially rolls out the template and completes a build of the bundle, it will not update the bundle for subsequent commands without explicit instruction from the user via one of the command-line switches (unless, of course, the bundle is deleted). With the changes proposed here, Briefcase will instead (strive to) always keep the bundle up-to-date with the sources of truth and will require explicit instruction from the user to run a command with a stale bundle.

In many ways, I think this is the most intuitive behavior for Briefcase to assert by default. That is, when users make updates to their project, Briefcase should automatically incorporate those changes and only ignore them with explicit instruction. I think most users are expecting Briefcase to run their current app when they enter

Towards this end, instead of getting rid of the

Admittedly, I didn't really set out with this thought process per se...but I feel like this inversion of behavior makes sense in the long term. I am interested to hear your thoughts given these explicit user instructions certainly appear intentional...so, I may be overlooking larger principles recommending their use even in the presence of mechanisms to automatically apply them.
FWIW, using
Similarly, using
I think
I agree; the historical behavior has been mostly driven by the fact that we didn't have an ability to track changes that needed a template-level update.
I agree, with my only qualifying comment being that the "no update needed" check needs to be quick. We could run update-requirements and update-resources on every run today - but the no-op requirements check still takes a couple of seconds. The metadata-based approach you're working on here should achieve that, AFAICT. It also means that the update command itself is almost unneeded - except for the Xcode/Visual Studio (and, I guess, Gradle in Android Studio) case where you're using those environments to run/debug the project.
I forgot that we added that check. It's somewhat of a moot point if explicit
I was working through the "did any files change in this directory" algorithm today. Ultimately, if we want to know if any file experienced some sort of change, we'll need to track each file and its metadata in the tracking database. (Although, I suppose if we wanted to avoid writing all that information to the tracking database, we could create a hash of the metadata itself for all of the files.) As for the specific pieces of metadata, I'm not actually sure it will be valuable to create a hash of each file. To give this algorithm a chance of running quickly, we can't verify on every run that the hash of the file content hasn't changed. So, we're left comparing the pieces of metadata that are quick to retrieve...and if any of those have changed, we can assume that file has been updated. From there, calculating a hash only when other metadata has changed doesn't seem to have much value to me....unless we're going to ignore those metadata changes if the hash still matches....but I'm not sure that's appropriate. So, what I'm thinking at this point is:
I think this accomplishes our goal of detecting changes. For something like Git, I think it needs to go one step further to hash the file because it would need to know whether to create a new object in its database to hold the file's current content or not. So, please let me know if you see holes in this algorithm. I think the immediate one that comes to mind is "what if the file changes but the metadata doesn't?" This kinda feels like "what if I create a hash collision?" Well, it's possible...but that's really unlikely to happen in normal workflows...unless that's specifically what you're trying to do. At any rate, the only way to detect this situation would be to calculate a hash...but that brings us full circle to running a hash in the critical path of this algorithm....and we can't...
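As an illustration of the metadata-only approach being discussed (a hypothetical sketch with invented names, not the PR's implementation): record just `(size, mtime)` per file in a snapshot, and treat any added, removed, or differing entry as a change - no file content is ever read.

```python
import os
from pathlib import Path
from typing import Dict, Tuple

# Map of relative path -> (size in bytes, modification time in ns).
Metadata = Dict[str, Tuple[int, int]]

def snapshot(root: Path) -> Metadata:
    """Collect quick-to-retrieve metadata for every file under `root`."""
    meta: Metadata = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = Path(dirpath) / name
            stat = path.stat()
            meta[str(path.relative_to(root))] = (stat.st_size, stat.st_mtime_ns)
    return meta

def has_changed(old: Metadata, new: Metadata) -> bool:
    # Dict equality covers additions, removals, and modified entries at once.
    return old != new
```

The snapshot dict (or, to keep the tracking database small, a single hash of its contents) is what would get persisted between runs.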
That seems fine to me. It will definitely catch the obvious cases, and if there are any common patterns to the non-obvious cases, they should show up soon enough.
Yeah - that's definitely an edge case I think we can live with. The only other thought I've had is to make this someone else's problem: tools like watchdog implement a lot of this functionality, with the benefit that someone else is maintaining it and keeping on top of all the weird edge-cases that exist with filesystems etc.
I guess not so surprisingly...understanding if something changed inside a directory is becoming full of corner cases. If we just consider the app sources, this should be mostly straightforward since these directories should just contain Python modules and assets for the app. However, the situation becomes much more fraught for local app requirements. For whatever reason, building the sdist updates the modified datetime for the top-level directory of the requirement. So, ok...we can exclude considering the top-level directory, I guess, since what we really care about is the contents anyway. But what about the state of Git for the requirement? If the user runs But this kinda feels like the tip of an iceberg... That said, I think the use of
So, I ended up writing a quick diff utility for our purposes before I properly looked at watchdog...but I wish I'd realized watchdog had "directory snapshot" support before I did all that. Its ability to snapshot a directory is basically a more battle-hardened version of what I wrote. They also provide a way to directly compare two snapshots for equality. However...it isn't all roses; the "snapshot" that's returned is a pseudo-dictionary of filepaths mapped to their metadata. The metadata, though, is a Therefore, using watchdog, I see two options:
I've implemented the hash method for now. Open to ideas/thoughts. [EDIT] [EDIT EDIT]
Force-pushed from 3af40d1 to 1fffd46
I'm comfortable being a little over-eager on this. As long as the case of "I didn't touch a thing" returns as "no change", I'm OK with an empty git updates or
Also - it involves pickles, which... Aside from the security implications of objects that preserve executable state, we then have to deal with potential pickle version incompatibilities. Suffice to say I'll go to great lengths to avoid using pickles.
The hash approach definitely sounds sufficient to me. I can't think of any reason we need to do a deep diff - we just need to know if a change has occurred at all. The only edge case I can think of is whether it's sensitive to file ordering at all - i.e., if the OS returns the same file list but in a different order, does that evaluate as a filesystem change? I'm not sure if that's a problem in practice (or even how you'd evaluate if it happens...), but it's worth poking around to confirm.
It might be worth switching local file references to a 2-pass "build wheel/install wheel" approach. The web backend currently does this (because wheels are required for distribution), but building a wheel cache using a build directory that isn't in the package's source folder should avoid the "filesystem change" issue.
When I first implemented this, I couldn't understand why it always created a new hash for the directory...even when watchdog did not find changes with its "directory snapshot diff" support. It was, in fact, the ordering of the paths used to create the hash; the metadata for each path must be incorporated into the hash in the same order each time the hash is created. After that, the hash was reliably consistent when a directory hadn't changed.
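A minimal sketch of that fix (the helper name is hypothetical): collect the per-file records first, then sort them by path before folding them into the digest, so OS traversal order can never change the result.

```python
import hashlib
import os
from pathlib import Path

def directory_fingerprint(root: Path) -> str:
    """Hash of each file's path + metadata, in a deterministic order."""
    entries = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = Path(dirpath) / name
            stat = path.stat()
            entries.append(
                (str(path.relative_to(root)), stat.st_size, stat.st_mtime_ns)
            )
    digest = hashlib.sha256()
    # Sorting is the crucial step: without it, the same directory contents
    # can produce different hashes from run to run.
    for rel_path, size, mtime_ns in sorted(entries):
        digest.update(f"{rel_path}\0{size}\0{mtime_ns}\0".encode())
    return digest.hexdigest()
```

The `\0` separators guard against ambiguity between adjacent fields (e.g. a path ending in a digit running into the size).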
Force-pushed from cdcf599 to 2f73864
As a preliminary speed test, I created a
0.9s for 100k files sounds acceptable to me - most projects should be a lot less than that. What's the timing on a "just the Toga template" project? Just thinking about ways a speed test might be misleading - is that 100k files in a single directory, or split across lots of directories? Directory traversal could impact speed...
My main machine reads it in a few hundredths of a second or so.
Just in a single directory. I'll definitely need to consider some more varied scenarios. My hope is that since we're just reading metadata, access to it is comparatively well optimized by the file system and OS over accessing file content itself....at least for modern systems making repeated calls for the same metadata.
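For the flat-directory case above, a metadata-only pass can be sketched with `os.scandir` (the function name here is illustrative, not Briefcase code): `DirEntry.stat()` can often be served from information the OS already produced while listing the directory, so no file is ever opened.

```python
import os
from pathlib import Path

def scan_metadata(root: Path) -> int:
    """Stat every file directly inside `root`; return how many were seen."""
    count = 0
    with os.scandir(root) as entries:
        for entry in entries:
            if entry.is_file():
                entry.stat()  # size/mtime only - no file content is read
                count += 1
    return count
```

Wrapping the call between two `time.perf_counter()` readings would give comparable numbers for differently shaped trees (one flat directory vs. many nested ones), which is the variable the comment above flags.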
See #1733 for a related edge case that might need to be detected: has the venv itself changed? The datestamp on the |
Force-pushed from 2f73864 to 60036ce
Yeah; I've been thinking about how to detect this as well. Initially, the modified datetime of the python exe wasn't useful...until I told Along with this, I've implemented the heavy hitters for this so far:
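One way to fingerprint "has the venv itself changed" (a hypothetical sketch, not the PR's mechanism): combine the interpreter's resolved path with the contents of `pyvenv.cfg`, which records the base interpreter and Python version for a venv.

```python
import hashlib
from pathlib import Path

def venv_marker(python_exe: Path) -> str:
    """Digest that changes when the venv is moved or rebuilt (illustrative)."""
    digest = hashlib.sha256()
    digest.update(str(python_exe.resolve()).encode())
    # Layout assumption: the exe lives in <venv>/bin (or Scripts) and
    # pyvenv.cfg sits in the venv root one level up.
    cfg = python_exe.parent.parent / "pyvenv.cfg"
    if cfg.exists():
        digest.update(cfg.read_bytes())
    return digest.hexdigest()
```

Recreating the venv rewrites `pyvenv.cfg` (new `home`/`version` values if the base Python changed), so the marker moves even when the interpreter's filename and datestamp look the same.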
Complications
Open to high-level feedback on what's been completed so far if you're interested. Still plenty of debugging code and refinements necessary, though. Next step is figuring out a system for detecting arbitrary metadata changes while allowing the commands to have different sensitivities to those changes.
Force-pushed from 1aa4c72 to 38fbb22
How sensitive is this to dev commit-level "version change" updates?
So - this might be a terminology problem. Anything in
Would there be any impact to always deferring? It's clearly needed for the Android et al cases, but what is stopping us from deferring the tracking database for the "default" case? Is there a case where those builds need "current" build data?
None at all; only the base portion of the Briefcase version is tracked.
hmm...ok; but is there a requirement that the resources live inside a directory specified in
If we defer tracking each step for all builds, we lose any fidelity for tracking successfully completed independent tasks when a build fails. For instance, if Briefcase completes tasks A and B but C fails, deferred tracking wouldn't be able to know this....unless we added an intermediate level of tracking when the tasks are completed, with adding to the tracking database being the deferred part, I guess.....but then why defer at all? I might have tunnel vision at this point; so, please let me know if you're imagining something else.
We might want to include the "dev" part (but not the dev number) in the marker that is used here - When Briefcase goes from 0.3.18dev1234 to 0.3.18, we probably want to flag that as a notable update.
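That suggestion could look something like this (a hypothetical helper, not Briefcase code): keep the base version plus a bare ".dev" flag, dropping the dev number, so `0.3.18dev1234` → `0.3.18` registers as a notable update while `0.3.18dev1234` → `0.3.18dev1235` does not.

```python
import re

def version_marker(version: str) -> str:
    """Reduce a version string to the part worth tracking (illustrative)."""
    # Matches "0.3.18", "0.3.18.dev1234", and "0.3.18dev1234" style strings.
    match = re.match(r"^(?P<base>\d+(?:\.\d+)*)(?P<dev>\.?dev)\d*$", version)
    if match:
        # Dev release: keep the base plus a normalized ".dev" flag.
        return match.group("base") + ".dev"
    # No dev segment: the full version is the marker.
    return version
```

Comparing markers rather than full versions means routine dev-to-dev bumps stay quiet, but crossing the dev/release boundary changes the marker.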
There's no requirement that they be in The distinction is currently blurred because the default
I guess it comes down to the complexity of having 2 different flavours of tracking to support the Android/Flatpak case. In your example with a failed C, it's obviously preferable that only C is done on the next pass, but I don't have an issue with the next build requiring A and B be repeated if the overhead/complexity of managing a more granular state is high, or managing state in a way that is compatible with the Android/Flatpak case requires 2 significantly different implementations.
That makes sense. Although, this really only applies to installing app requirements; everything else has Briefcase install it to That said, tracking at the "command level" instead of "command task level" could create a cleaner implementation....insofar as there wouldn't be tracking code littered throughout the implementation of the Command. Instead, tracking could just happen at the beginning and end of each Command. This argument might sway me to track only at successful command completion. Interestingly, though, installing app requirements offers another complication here: when Briefcase installs requirements, it first deletes anything that may be already installed; so, if the build command fails, we'd be forced to clear the current tracking and force re-installation of requirements regardless the next time, to ensure the requirements are actually installed.
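"Tracking at the beginning and end of each Command" can be pictured with a small context manager (illustrative names, a sketch rather than the PR's code): clear any prior success record up front, and write the record only on clean completion, so a failed run naturally forces re-execution next time.

```python
from contextlib import contextmanager

@contextmanager
def tracked(db: dict, command: str):
    """Record `command` as up to date only if its body completes cleanly."""
    # Clearing first means a failure part-way through leaves no stale
    # "success" entry - the next run will redo the command.
    db.pop(command, None)
    yield
    db[command] = "completed"  # reached only when no exception was raised
```

A Command's body would run inside `with tracked(db, "build"): ...`; if task C raises, the `db[command] = ...` line never executes, matching the "clear tracking, force re-run" behavior described above.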
Force-pushed from 3d8dc9a to a647d68
Force-pushed from a647d68 to e5cd08c
Force-pushed from cab691b to 0a1a661
Force-pushed from 69db25d to 2007db3
Force-pushed from 2007db3 to 4202070
Changes
Related PRs
- .briefcase directory to .gitignore (briefcase-template#112)
Relevant issues
- create, update, build and run automatically execute any of the earlier steps which are out of date (#807)
- briefcase run fails after briefcase create when using stub app templates (#1729)
- briefcase dev update dependencies automatically (#881)
PR Checklist: