Submit command #6
Merged
Conversation
Incompatible version. Also, the `click` v8 library already ships with type annotations.
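For illustration, a minimal sketch (the subcommand and its names are hypothetical): click v8 includes a `py.typed` marker, so type checkers pick up its annotations directly, with no separate stubs package needed.

```python
import click


@click.command()
@click.argument("url")
def submit(url: str) -> None:
    """Hypothetical subcommand; mypy checks the annotations above directly."""
    click.echo(f"Submitting {url}")


if __name__ == "__main__":
    submit()
```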
* Explicitly creating subdirs for each command step seems wrong. Is there a better way?
* We think data futures (https://parsl.readthedocs.io/en/stable/userguide/futures.html#datafutures) might be the "right" thing to use for chaining commands instead of calling `result()` for each, but that requires us to know in advance what the output filename of each command is. Are there cases where a command could output multiple files? (See the sketch below.)
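A minimal sketch of the DataFutures pattern from the Parsl docs, with hypothetical URLs, paths, and a made-up `process_tags` command:

```python
import parsl
from parsl import bash_app
from parsl.configs.local_threads import config
from parsl.data_provider.files import File

parsl.load(config)


@bash_app
def fetch(outputs=()):
    # wget figures out the filename itself when given a directory with -P
    return "wget -P /data/fetch https://example.com/source.nc"


@bash_app
def transform(inputs=(), outputs=()):
    return f"process_tags --input {inputs[0]} --output {outputs[0]}"


fetched = fetch(outputs=[File("/data/fetch/source.nc")])

# Passing the DataFuture (not the result) is what chains the apps: Parsl
# defers `transform` until the fetched file exists, with no .result() calls
# in between.
transformed = transform(
    inputs=[fetched.outputs[0]],
    outputs=[File("/data/transform/tags.csv")],
)
transformed.result()  # block only at the end of the chain
```

Note that this is exactly where the "know the output filename in advance" requirement bites: each `File(...)` path has to be declared before the app runs.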
Flesh out Nox usage docs
Unfortunately, this isn't quite what we were looking for. We actually want to `apply`, not `create`: with `create` (the imperative method) we need to deal with cleaning up the previous object, or patching it. With `apply` (the declarative method), we specify what we want and Kubernetes takes care of the rest.
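A sketch of the contrast, shelling out to kubectl from Python ("job.yaml" is a hypothetical manifest path; in practice you'd run one call or the other):

```python
import subprocess

# Imperative: fails with AlreadyExists if the object is already present,
# leaving us to delete or patch the previous object ourselves.
subprocess.run(["kubectl", "create", "-f", "job.yaml"], check=True)

# Declarative: creates the object if missing, otherwise patches it to match
# the manifest; Kubernetes reconciles the rest.
subprocess.run(["kubectl", "apply", "-f", "job.yaml"], check=True)
```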
NOTE: Some imports still need fixing
`curl` complains when it's given a directory with `-O` and the URL doesn't contain a filename it can use. `wget` can fetch the remote resource and figure out the filename itself when given a directory to write to with `-P`.
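The `wget` behavior described above, shelled out from Python; the URL (which deliberately has no filename in its path) and destination directory are hypothetical:

```python
import subprocess

# -P names only the directory; wget derives the filename from the response.
subprocess.run(
    ["wget", "-P", "/data/fetch", "https://example.com/download?id=123"],
    check=True,
)
```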
Latest parsl code + added `wget`
Raises an error when variables are unset.
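A minimal sketch of that behavior, assuming Jinja2 is the templating engine (an assumption; the PR may use something else):

```python
from jinja2 import Environment, StrictUndefined
from jinja2.exceptions import UndefinedError

env = Environment(undefined=StrictUndefined)
template = env.from_string("wget -P {{ output_dir }} {{ url }}")

try:
    # output_dir is never supplied, so StrictUndefined raises instead of
    # silently rendering an empty string.
    template.render(url="https://example.com/source.nc")
except UndefinedError as err:
    print(err)  # 'output_dir' is undefined
```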
This feels increasingly "wrong". By injecting values into templates representing Python code, we don't gain any of the advantages of our usual tooling around types, code formatting, syntax checking, etc. We don't find out about, e.g., syntax errors and incorrectly formatted function calls (kwarg vs. positional, as in this case) until we try to execute the program.
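A hypothetical illustration of the failure mode (the `fetch` call and its arguments are made up): the rendered string is never seen by mypy, ruff, or black, so a positional/keyword mix-up surfaces only at run time.

```python
from jinja2 import Environment

env = Environment()
template = env.from_string("fetch({{ url }}, '/data/fetch')")
rendered = template.render(url="'https://example.com/source.nc'")

# `rendered` is just a string; if fetch() actually requires dest= as a
# keyword-only argument, nothing flags the mistake until the generated
# program is executed.
print(rendered)  # fetch('https://example.com/source.nc', '/data/fetch')
```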
This uses the remote server's name for a file when it isn't available in the URL path.
trey-stafford force-pushed the submit-command-trey branch from c0e2f4a to 9bdb6cd on April 30, 2024 at 18:46
What do you think of merging everything to main at this stage?
trey-stafford force-pushed the submit-command-trey branch from d220c1c to 582c02a on April 30, 2024 at 19:20
trey-stafford force-pushed the submit-command-trey branch from 328125c to a1fb449 on April 30, 2024 at 19:23
mfisher87 approved these changes Apr 30, 2024
Submitting the seal tags recipe now works.
There are still some issues:
* Log files written to `/data/` are read back and their contents printed. This works OK, but there is no real logging around the outputs captured in these files, so it is unclear when the logged messages were generated.
* The persistent files written to `/data/` currently just get appended to when the job is run more than once. Manual cleanup should be replaced by something automated.
* The `wget` fetch step will happily re-download the source data file every time the job is submitted (appending a number to the end of the filename). For that matter, none of the output dirs are unique to a given run, so commands that fail without e.g., `--overwrite` will fail on re-runs unless we do manual cleanup. (One possible fix is sketched below.)
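One way to address the re-run collisions, as a hypothetical sketch (the `make_run_dir` helper and the `/data` base path are illustrative, not part of this PR): give each run its own output directory so re-runs never see stale files.

```python
from datetime import datetime, timezone
from pathlib import Path


def make_run_dir(base: Path = Path("/data")) -> Path:
    """Create and return a unique per-run output directory."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    run_dir = base / f"run-{stamp}"
    # exist_ok=False makes an accidental collision fail loudly rather than
    # silently appending to a previous run's outputs.
    run_dir.mkdir(parents=True, exist_ok=False)
    return run_dir
```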