Workflow fixes #629
Conversation
Several miscellaneous fixes found while testing the DMOD model exec workflow and other related supplementary CLI commands:

- Fix how enum values for the CLI param type are handled with AllocationParadigm (see the sketch after this list).
- Fix argparse dest/param names to match the keyword arg names used later (composite_config_data_id and workflow in the function handling exec subcommands).
- Fix _generate_request_response receipt of the reason and message args to account for the sources being None values when successful.
- Fix use of service_client: it is no longer an async context manager.
- Fix parsing of messages to the correct handler function (types should map to a function of an instance), and improve the handler functions for job info and listing requests.
- Update apply_auth to optionally raise an exception on failure rather than return False, via a new keyword arg that defaults to False.
- Make the secret default to an empty string, to allow creating an instance before the secret is available (and setting it later).
- Update SchedulerRequestResponse and SchedulerRequestResponseBody type hinting for the job_id param/attribute, since these are strings in Job.
- Update to address NOAA-OWP#628 (scheduler and scheduler service already addressed).
- Add LD_LIBRARY_PATH as needed, and fix an issue with numpy appearing not to be present (even though it was) by running within a Python virtual environment.
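To illustrate the first bullet, here is a minimal sketch of the usual argparse pattern for enum-valued CLI params. The AllocationParadigm members and option wiring below are assumptions for illustration only; the real enum lives in DMOD's code and its members and defaults may differ.

```python
import argparse
from enum import Enum


# Hypothetical stand-in for DMOD's AllocationParadigm; real members may differ.
class AllocationParadigm(Enum):
    FILL_NODES = "fill-nodes"
    ROUND_ROBIN = "round-robin"
    SINGLE_NODE = "single-node"


parser = argparse.ArgumentParser()
# Passing the Enum class as ``type`` converts the CLI string to a member (and
# raises ValueError, which argparse reports as a usage error, on bad input);
# ``dest`` is kept aligned with the keyword arg name used by downstream code.
parser.add_argument(
    "--allocation-paradigm",
    dest="allocation_paradigm",
    type=AllocationParadigm,
    choices=list(AllocationParadigm),
    default=AllocationParadigm.SINGLE_NODE,
)

args = parser.parse_args(["--allocation-paradigm", "round-robin"])
assert args.allocation_paradigm is AllocationParadigm.ROUND_ROBIN
```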
Had one question that should be super easy to address. Let's get this on the road though. Thanks for making these changes @robertbartel!
```diff
@@ -664,6 +664,10 @@ RUN if [ "${NGEN_WITH_PYTHON}" == "ON" ]; then \
     fi
 USER ${USER}
+
+ENV VIRTUAL_ENV=/dmod/venv
+RUN python3 -m venv $VIRTUAL_ENV && pip3 install numpy
```
Is it intentional not to activate the venv after creating it here? So, should this instead be:

```diff
-RUN python3 -m venv $VIRTUAL_ENV && pip3 install numpy
+RUN python3 -m venv $VIRTUAL_ENV && source $VIRTUAL_ENV/bin/activate && pip3 install numpy
```
Pretty sure it is effectively active if `VIRTUAL_ENV` is set in the environment (eh, maybe `PATH` needs updating too, but that's also done).
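As a quick illustrative check (not part of the PR): inside the image, the interpreter itself can report whether the venv is the one in use.

```python
# In a PEP 405 venv, sys.prefix points at the venv while sys.base_prefix
# points at the underlying Python installation; they differ iff a venv is active.
import sys

print("interpreter:", sys.executable)
print("in venv:", sys.prefix != sys.base_prefix)
```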
I think that is just used for bookkeeping, so that the added `deactivate` shell function can remove `$VIRTUAL_ENV` from `$PATH`.
Here is what a `venv/bin/activate` script looks like (on macOS):
```sh
# This file must be used with "source bin/activate" *from bash*
# you cannot run it directly

deactivate () {
    # reset old environment variables
    if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then
        PATH="${_OLD_VIRTUAL_PATH:-}"
        export PATH
        unset _OLD_VIRTUAL_PATH
    fi
    if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then
        PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}"
        export PYTHONHOME
        unset _OLD_VIRTUAL_PYTHONHOME
    fi

    # This should detect bash and zsh, which have a hash command that must
    # be called to get it to forget past commands. Without forgetting
    # past commands the $PATH changes we made may not be respected
    if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
        hash -r 2> /dev/null
    fi

    if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then
        PS1="${_OLD_VIRTUAL_PS1:-}"
        export PS1
        unset _OLD_VIRTUAL_PS1
    fi

    unset VIRTUAL_ENV
    if [ ! "${1:-}" = "nondestructive" ] ; then
        # Self destruct!
        unset -f deactivate
    fi
}

# unset irrelevant variables
deactivate nondestructive

VIRTUAL_ENV="/home/user/docker-py/venv"
export VIRTUAL_ENV

_OLD_VIRTUAL_PATH="$PATH"
PATH="$VIRTUAL_ENV/bin:$PATH"
export PATH

# unset PYTHONHOME if set
# this will fail if PYTHONHOME is set to the empty string (which is bad anyway)
# could use `if (set -u; : $PYTHONHOME) ;` in bash
if [ -n "${PYTHONHOME:-}" ] ; then
    _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}"
    unset PYTHONHOME
fi

if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then
    _OLD_VIRTUAL_PS1="${PS1:-}"
    PS1="(venv) ${PS1:-}"
    export PS1
fi

# This should detect bash and zsh, which have a hash command that must
# be called to get it to forget past commands. Without forgetting
# past commands the $PATH changes we made may not be respected
if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
    hash -r 2> /dev/null
fi
```
```diff
@@ -664,6 +664,10 @@ RUN if [ "${NGEN_WITH_PYTHON}" == "ON" ]; then \
     fi
 USER ${USER}
+
+ENV VIRTUAL_ENV=/dmod/venv
+RUN python3 -m venv $VIRTUAL_ENV && pip3 install numpy
+ENV PATH="$VIRTUAL_ENV/bin:$PATH"
```
Hmmm, I assume you are adding this to `PATH` so the venv is effectively always activated. Is there a reason why we can't use the default python environment, or instead add venv activation in the entrypoint?
Basically, something weird seems to be going on, probably after NOAA-OWP/ngen#755, that's breaking the image build. See CIROH-UA/NGIAB-HPCInfra#12 and CIROH-UA/NGIAB-CloudInfra#137 for more details on this specifically, as others are running into it as well.
Thanks for linking those related issues. Seems like something to do with using a non-virtual environment. My main concern is making it really clear that we are using a virtual environment in the image, without needing to come and read the Dockerfile.
Fair point. I've opened #630. I don't really want to move away from the default environment, but it seems necessary for now; I'd like to eventually take this out, once the underlying ngen build issues are resolved.
Yeah, this is super weird. Let's just move ahead like you are suggesting and revisit this when we have a fix.
```dockerfile
# ... make sure it stays in sync with configure step for netcdf above
ENV NETCDFINC=/usr/include
ENV VIRTUAL_ENV=/dmod/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
```
Similar question as above here.