
Workflow fixes #629

Merged (12 commits, May 21, 2024)
17 changes: 16 additions & 1 deletion docker/main/ngen/Dockerfile
@@ -664,6 +664,10 @@ RUN if [ "${NGEN_WITH_PYTHON}" == "ON" ]; then \
fi
USER ${USER}

ENV VIRTUAL_ENV=/dmod/venv
RUN python3 -m venv $VIRTUAL_ENV && pip3 install numpy
Member

Is it intentional not to activate the venv after creating it here? So, should this instead be:

Suggested change:
- RUN python3 -m venv $VIRTUAL_ENV && pip3 install numpy
+ RUN python3 -m venv $VIRTUAL_ENV && source $VIRTUAL_ENV/bin/activate && pip3 install numpy

Contributor Author @robertbartel (May 21, 2024)

Pretty sure it is effectively active if VIRTUAL_ENV is set in the environment (eh, maybe PATH needs updating too, but that's also done).
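
For context, a quick sketch of the equivalence being described (hypothetical shell, not part of this PR): the two ENV lines reproduce the only changes activate makes that persist, the rest of that script being bookkeeping for deactivate.

# Setting these two variables by hand is effectively what "activate" does:
export VIRTUAL_ENV=/dmod/venv
export PATH="$VIRTUAL_ENV/bin:$PATH"

# Afterward, python3 and pip3 resolve to the venv's copies:
command -v pip3                              # -> /dmod/venv/bin/pip3
python3 -c 'import sys; print(sys.prefix)'   # -> /dmod/venv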

Member

I think that is just used for bookkeeping so that the added deactivate shell function can remove $VIRTUAL_ENV from $PATH.

Member

Here is what a venv/bin/activate script looks like (on a Mac):

activate script
# This file must be used with "source bin/activate" *from bash*
# you cannot run it directly

deactivate () {
    # reset old environment variables
    if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then
        PATH="${_OLD_VIRTUAL_PATH:-}"
        export PATH
        unset _OLD_VIRTUAL_PATH
    fi
    if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then
        PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}"
        export PYTHONHOME
        unset _OLD_VIRTUAL_PYTHONHOME
    fi

    # This should detect bash and zsh, which have a hash command that must
    # be called to get it to forget past commands.  Without forgetting
    # past commands the $PATH changes we made may not be respected
    if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
        hash -r 2> /dev/null
    fi

    if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then
        PS1="${_OLD_VIRTUAL_PS1:-}"
        export PS1
        unset _OLD_VIRTUAL_PS1
    fi

    unset VIRTUAL_ENV
    if [ ! "${1:-}" = "nondestructive" ] ; then
    # Self destruct!
        unset -f deactivate
    fi
}

# unset irrelevant variables
deactivate nondestructive

VIRTUAL_ENV="/home/user/docker-py/venv"
export VIRTUAL_ENV

_OLD_VIRTUAL_PATH="$PATH"
PATH="$VIRTUAL_ENV/bin:$PATH"
export PATH

# unset PYTHONHOME if set
# this will fail if PYTHONHOME is set to the empty string (which is bad anyway)
# could use `if (set -u; : $PYTHONHOME) ;` in bash
if [ -n "${PYTHONHOME:-}" ] ; then
    _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}"
    unset PYTHONHOME
fi

if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then
    _OLD_VIRTUAL_PS1="${PS1:-}"
    PS1="(venv) ${PS1:-}"
    export PS1
fi

# This should detect bash and zsh, which have a hash command that must
# be called to get it to forget past commands.  Without forgetting
# past commands the $PATH changes we made may not be respected
if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then
    hash -r 2> /dev/null
fi

ENV PATH="$VIRTUAL_ENV/bin:$PATH"
Member

Hmmm, I assume you are adding this to PATH so venv is effectively always activated. Is there a reason why we can't use the default python environment or instead add venv activation in the entrypoint?
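
For reference, the entrypoint alternative would look something like the following hypothetical sketch (assuming the venv already exists at /dmod/venv), rather than baking VIRTUAL_ENV and PATH into the image:

# At the top of the entrypoint script, before anything needs Python:
if [ -f /dmod/venv/bin/activate ]; then
    # "source" works here because the entrypoint scripts run under bash
    source /dmod/venv/bin/activate
fi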

Contributor Author @robertbartel (May 21, 2024)

Basically, something weird seems to be going on, probably after NOAA-OWP/ngen#755, that's breaking the image build. See CIROH-UA/NGIAB-HPCInfra#12 and CIROH-UA/NGIAB-CloudInfra#137 for more details on this specifically, as others are running into it also.

Member

Thanks for linking those related issues. Seems like something to do with using a non-virtual environment. My main concern is making it really clear that we are using a virtual environment in the image without needing to come and read the Dockerfile.

Contributor Author

Fair point. I've opened #630. I don't really want to move away from the default environment, but it seems necessary for now; I'd like to take this out eventually, once the underlying ngen build issues are resolved.

Member

Yeah, this is super weird. Let's just move ahead like you are suggesting and revisit this when we have a fix.


RUN cd ${BOOST_ROOT} \
&& tar -xf boost_tarball.blob --strip 1 \
&& rm boost_tarball.blob \
@@ -843,7 +847,11 @@ RUN cd ${BOOST_ROOT} \
&& chmod +x ${WORKDIR}/entrypoint.sh

WORKDIR ${WORKDIR}
ENV PATH=${WORKDIR}:${WORKDIR}/bin:$PATH
ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/lib:/usr/local/lib64:/dmod/shared_libs
# This next value is eventually needed for the t-route build ...
# ... make sure it stays in sync with configure step for netcdf above
ENV NETCDFINC=/usr/include
ENV PATH=/dmod/bin:${WORKDIR}:${WORKDIR}/bin:$PATH:/usr/lib64/mpich/bin
ENV NGEN_PART_EXECUTABLE="${PARTITIONER_EXECUTABLE}"
ENTRYPOINT ["entrypoint.sh"]

@@ -863,6 +871,13 @@ ENV WORKDIR=${WORKDIR}
ENV HYDRA_HOST_FILE=/etc/opt/hosts
ENV PATH=${WORKDIR}:${WORKDIR}/bin:/dmod/bin:${PATH}:/usr/lib64/mpich/bin

ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/lib:/usr/local/lib64:/dmod/shared_libs
# This next value is eventually needed for the t-route build ...
# ... make sure it stays in sync with configure step for netcdf above
ENV NETCDFINC=/usr/include
ENV VIRTUAL_ENV=/dmod/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
Member

Similar question as above here.


#RUN cd ./ngen && mkdir ${WORKDIR}/bin && cp cmake_build/ngen ${WORKDIR}/bin && cp -r data ${WORKDIR}/data \
# && cd $WORKDIR && rm -rf ngen boost

16 changes: 8 additions & 8 deletions docker/main/ngen/funcs.sh
@@ -23,10 +23,10 @@ init_script_mpi_vars()

init_ngen_executable_paths()
{
NGEN_SERIAL_EXECUTABLE="/ngen/ngen/cmake_build_serial/ngen"
NGEN_PARALLEL_EXECUTABLE="/ngen/ngen/cmake_build_parallel/ngen"
NGEN_SERIAL_EXECUTABLE="/dmod/bin/ngen-serial"
NGEN_PARALLEL_EXECUTABLE="/dmod/bin/ngen-parallel"
# This will be symlinked to the parallel one currently
NGEN_EXECUTABLE="/ngen/ngen/cmake_build/ngen"
NGEN_EXECUTABLE="/dmod/bin/ngen"
}

check_for_dataset_dir()
@@ -154,27 +154,27 @@ ngen_sanity_checks_and_derived_init()
# Run some sanity checks
# Use complement of valid range like this in a few places to catch non-integer values
if ! [ "${MPI_NODE_COUNT:-1}" -gt 0 ] 2>/dev/null; then
echo "Error: invalid value '${MPI_NODE_COUNT}' given for MPI node count" > 2>&1
>&2 echo "Error: invalid value '${MPI_NODE_COUNT}' given for MPI node count"
exit 1
fi
if ! [ "${WORKER_INDEX:-0}" -ge 0 ] 2>/dev/null; then
echo "Error: invalid value '${WORKER_INDEX}' given for MPI worker index/rank" > 2>&1
>&2 echo "Error: invalid value '${WORKER_INDEX}' given for MPI worker index/rank"
exit 1
fi

# Assume that any of these being present implies the job will run via multiple MPI processes
if [ -n "${MPI_NODE_COUNT:-}" ] || [ -n "${MPI_HOST_STRING:-}" ] || [ -n "${WORKER_INDEX:-}" ]; then
# ... and as such, they all must be present
if [ -z "${MPI_HOST_STRING:-}" ]; then
echo "Error: MPI host string not provided for job that will utilize MPI" > 2>&1
>&2 echo "Error: MPI host string not provided for job that will utilize MPI"
exit 1
fi
if [ -z "${MPI_NODE_COUNT:-}" ]; then
echo "Error: MPI node count not provided for job that will utilize MPI" > 2>&1
>&2 echo "Error: MPI node count not provided for job that will utilize MPI"
exit 1
fi
if [ -z "${WORKER_INDEX:-}" ]; then
echo "Error: MPI worker index not provided for job that will utilize MPI" > 2>&1
>&2 echo "Error: MPI worker index not provided for job that will utilize MPI"
exit 1
fi
# Also, require a partitioning config for any MPI job
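
The redirection fix in this hunk deserves a note: in the old form, "> 2>&1" parses as "redirect stdout to a file named 2, then point stderr at the same place", so each error message silently landed in a file literally named 2 instead of on stderr. A quick illustration (hypothetical shell, for explanation only):

# Old form: ">" followed by the word "2" writes stdout to a FILE named 2.
echo "Error: something failed" > 2>&1
cat 2                            # the message ended up in the file "2"

# Corrected form: ">&2" duplicates stdout onto stderr; no file is created.
>&2 echo "Error: something failed"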
22 changes: 11 additions & 11 deletions docker/main/ngen/ngen_entrypoint.sh
@@ -4,43 +4,43 @@
while [ ${#} -gt 0 ]; do
case "${1}" in
--config-dataset)
CONFIG_DATASET_NAME="${2:?}"
declare -x CONFIG_DATASET_NAME="${2:?}"
shift
;;
--host-string)
MPI_HOST_STRING="${2:?}"
declare -x MPI_HOST_STRING="${2:?}"
shift
;;
--hydrofabric-dataset)
HYDROFABRIC_DATASET_NAME="${2:?}"
declare -x HYDROFABRIC_DATASET_NAME="${2:?}"
shift
;;
--job-id)
JOB_ID="${2:?}"
declare -x JOB_ID="${2:?}"
shift
;;
--node-count)
MPI_NODE_COUNT="${2:?}"
declare -x MPI_NODE_COUNT="${2:?}"
shift
;;
--output-dataset)
OUTPUT_DATASET_NAME="${2:?}"
declare -x OUTPUT_DATASET_NAME="${2:?}"
shift
;;
--partition-dataset)
PARTITION_DATASET_NAME="${2:?}"
declare -x PARTITION_DATASET_NAME="${2:?}"
shift
;;
--worker-index)
WORKER_INDEX="${2:?}"
declare -x WORKER_INDEX="${2:?}"
shift
;;
esac
shift
done
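
The switch from plain assignment to declare -x in this loop means each parsed value is also exported, so it is visible to the child processes the entrypoint later launches, not just within the script itself. In shorthand (hypothetical illustration):

# Plain assignment: visible only to this shell.
JOB_ID="42"
bash -c 'echo "job: ${JOB_ID}"'    # prints "job: "

# declare -x: assignment plus export in one step...
declare -x JOB_ID="42"
# ...equivalent to: JOB_ID="42"; export JOB_ID
bash -c 'echo "job: ${JOB_ID}"'    # prints "job: 42"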

# Get some universally applicable functions and constants
source ./funcs.sh
source /ngen/funcs.sh

ngen_sanity_checks_and_derived_init
init_script_mpi_vars
@@ -49,8 +49,8 @@ init_ngen_executable_paths
# Move to the output dataset mounted directory
cd ${OUTPUT_DATASET_DIR:?Output dataset directory not defined}
#Needed for routing
if [ ! -e /dmod/dataset/experiment_output ]; then
ln -s $(pwd) /dmod/dataset/experiment_output
if [ ! -e /dmod/datasets/linked_job_output ]; then
ln -s $(pwd) /dmod/datasets/linked_job_output
fi

# We can allow worker index to not be supplied when executing serially
8 changes: 5 additions & 3 deletions python/lib/client/dmod/client/__main__.py
@@ -48,11 +48,13 @@ def _create_ngen_based_exec_parser(subcommand_container: Any, parser_name: str,
new_parser = subcommand_container.add_parser(parser_name)
new_parser.add_argument('--partition-config-data-id', dest='partition_cfg_data_id', default=None,
help='Provide data_id for desired partition config dataset.')
paradigms = [p for p in AllocationParadigm]
new_parser.add_argument('--allocation-paradigm',
dest='allocation_paradigm',
type=AllocationParadigm.get_from_name,
choices=[val.name.lower() for val in AllocationParadigm],
choices=paradigms,
default=default_alloc_paradigm,
metavar=f"{{{', '.join(p.name.lower() for p in paradigms)}}}",
help='Specify job resource allocation paradigm to use.')
new_parser.add_argument('--catchment-ids', dest='catchments', nargs='+', help='Specify catchment subset.')
new_parser.add_argument('--forcings-data-id', dest='forcings_data_id', help='Specify catchment subset.')
@@ -64,7 +66,7 @@ def _create_ngen_based_exec_parser(subcommand_container: Any, parser_name: str,
help='Model time range ({} to {})'.format(print_date_format, print_date_format))
new_parser.add_argument('hydrofabric_data_id', help='Identifier of dataset of required hydrofabric')
new_parser.add_argument('hydrofabric_uid', help='Unique identifier of required hydrofabric')
new_parser.add_argument('config_data_id', help='Identifier of composite config dataset with required configs')
new_parser.add_argument('composite_config_data_id', help='Identifier of composite config dataset with required configs')
new_parser.add_argument('cpu_count', type=int, help='Provide the desired number of processes for the execution')
new_parser.add_argument('memory', type=int, help='Provide the desired amount of memory (bytes) for the execution')

@@ -104,7 +106,7 @@ def _handle_exec_command_args(parent_subparsers_container):
command_parser = parent_subparsers_container.add_parser('exec')

# Subparser under the exec command's parser for handling the different job workflows that might be started
workflow_subparsers = command_parser.add_subparsers(dest='workflow_starter')
workflow_subparsers = command_parser.add_subparsers(dest='workflow')
workflow_subparsers.required = True

# Add some parsers to deserialize a request from a JSON string, or ...
2 changes: 1 addition & 1 deletion python/lib/client/dmod/client/_version.py
@@ -1 +1 @@
__version__ = '0.8.0'
__version__ = '0.8.1'
18 changes: 9 additions & 9 deletions python/lib/client/dmod/client/dmod_client.py
@@ -270,7 +270,7 @@ async def execute_job(self, workflow: str, **kwargs) -> ResultIndicator:
else:
raise ValueError(f"Unsupported job execution workflow {workflow}")

async def job_command(self, command: str, **kwargs) -> ResultIndicator:
async def job_command(self, job_command: str, **kwargs) -> ResultIndicator:
"""
Submit a request that performs a particular job command.

Expand All @@ -283,7 +283,7 @@ async def job_command(self, command: str, **kwargs) -> ResultIndicator:

Parameters
----------
command : str
job_command : str
A string indicating the particular job command to run.
kwargs
Other required/optional parameters as needed/desired for the particular job command to be run.
@@ -294,20 +294,20 @@ async def job_command(self, command: str, **kwargs) -> ResultIndicator:
An indicator of the results of attempting to run the command.
"""
try:
if command == 'info':
if job_command == 'info':
return await self.job_client.request_job_info(**kwargs)
elif command == 'list':
elif job_command == 'list':
return await self.job_client.request_jobs_list(**kwargs)
elif command == 'release':
elif job_command == 'release':
return await self.job_client.request_job_release(**kwargs)
elif command == 'status':
elif job_command == 'status':
return await self.job_client.request_job_status(**kwargs)
elif command == 'stop':
elif job_command == 'stop':
return await self.job_client.request_job_stop(**kwargs)
else:
raise ValueError(f"Unsupported job command to {self.__class__.__name__}: {command}")
raise ValueError(f"Unsupported job command to {self.__class__.__name__}: {job_command}")
except NotImplementedError:
raise NotImplementedError(f"Supported command {command} not yet implemented by {self.__class__.__name__}")
raise NotImplementedError(f"Supported command {job_command} not yet implemented by {self.__class__.__name__}")

def print_config(self):
print(self.client_config.json(by_alias=True, exclude_none=True, indent=2))
2 changes: 1 addition & 1 deletion python/lib/client/setup.py
@@ -22,7 +22,7 @@
license='',
include_package_data=True,
#install_requires=['websockets', 'jsonschema'],vi
install_requires=['dmod-core>=0.16.0', 'websockets>=8.1', 'pydantic>=1.10.8,~=1.10', 'dmod-communication>=0.18.0',
install_requires=['dmod-core>=0.16.0', 'websockets>=8.1', 'pydantic>=1.10.8,~=1.10', 'dmod-communication>=0.19.0',
'dmod-externalrequests>=0.6.0', 'dmod-modeldata>=0.12.0'],
packages=find_namespace_packages(include=['dmod.*'], exclude=['dmod.test'])
)
2 changes: 1 addition & 1 deletion python/lib/communication/dmod/communication/_version.py
@@ -1 +1 @@
__version__ = '0.18.0'
__version__ = '0.19.0'
8 changes: 7 additions & 1 deletion python/lib/communication/dmod/communication/client.py
@@ -284,14 +284,16 @@ def _prepare_auth_request_payload(self) -> dict:
# TODO: Fix this to not be ... fixed ...
return {'username': 'someone', 'user_secret': 'something'}

async def apply_auth(self, external_request: ExternalRequest) -> bool:
async def apply_auth(self, external_request: ExternalRequest, raise_on_fail: bool = False) -> bool:
"""
Apply appropriate authentication details to this request object, acquiring them first if needed.

Parameters
----------
external_request : ExternalRequest
A request that needs the appropriate session secret applied.
raise_on_fail : bool
Whether to raise a runtime error if unable to acquire a session, which by default is set to ``False``.

Returns
----------
@@ -301,6 +303,10 @@ async def apply_auth(self, external_request: ExternalRequest) -> bool:
if await self._async_acquire_session():
external_request.session_secret = self._session_secret
return True
elif raise_on_fail:
raise DmodRuntimeError(f"{self.__class__.__name__} was unable to acquire session for "
f"{external_request.__class__.__name__} (current secret value is "
f"{self._session_secret!s}")
else:
return False

@@ -7,7 +7,7 @@ class ExternalRequest(AbstractInitRequest, ABC):
"""
The base class underlying all types of externally-initiated (and, therefore, authenticated) MaaS system requests.
"""
session_secret: str
session_secret: str = ''

@classmethod
@abstractmethod
@@ -160,7 +160,7 @@ class SchedulerRequestResponse(Response):

data: Union[SchedulerRequestResponseBody, Dict[None, None], None]

def __init__(self, job_id: Optional[int] = None, output_data_id: Optional[str] = None, data: dict = None, **kwargs):
def __init__(self, job_id: Optional[str] = None, output_data_id: Optional[str] = None, data: dict = None, **kwargs):
# TODO: how to handle if kwargs has success=True, but job_id value (as param or in data) implies success=False

# Create an empty data if not supplied a dict, but only if there is a job_id or output_data_id to insert
@@ -9,7 +9,7 @@


class SchedulerRequestResponseBody(Serializable):
job_id: int = UNSUCCESSFUL_JOB
job_id: str = str(UNSUCCESSFUL_JOB)
output_data_id: Optional[str]

def __eq__(self, other: object):
@@ -192,12 +192,3 @@ def setUp(self) -> None:
bad_no_secret = deepcopy(self.base_examples[ManagementAction.CREATE])
bad_no_secret.pop('session_secret')
self.example_data.append(bad_no_secret)

def test_factory_init_from_deserialized_json_6_a(self):
""" Test deserialization for otherwise valid CREATE message data fails if no session secret. """
ex_indx = 6
#expected_action = ManagementAction.CREATE
data = self.example_data[ex_indx]

obj = self.TEST_CLASS_TYPE.factory_init_from_deserialized_json(data)
self.assertIsNone(obj)
@@ -112,7 +112,7 @@ def test_factory_init_from_deserialized_json_2_g(self):
the expected dictionary value for ``data``, with the ``job_id`` element having the correct value.
"""
obj = NGENRequestResponse.factory_init_from_deserialized_json(self.response_jsons[2])
self.assertEqual(obj.data['job_id'], 42)
self.assertEqual(obj.data['job_id'], '42')

def test_factory_init_from_deserialized_json_2_h(self):
"""
@@ -113,7 +113,7 @@ def test_factory_init_from_deserialized_json_2_g(self):
the expected dictionary value for ``data``, with the ``job_id`` element having the correct value.
"""
obj = NWMRequestResponse.factory_init_from_deserialized_json(self.response_jsons[2])
self.assertEqual(obj.data['job_id'], 42)
self.assertEqual(obj.data['job_id'], '42')

def test_factory_init_from_deserialized_json_2_h(self):
"""
@@ -12,10 +12,10 @@ def setUp(self) -> None:
self.tested_serializeable_type = SchedulerRequestResponse

# Example 0
self.request_strings.append('{"data": {"job_id": 42}, "message": "", "reason": "Job Scheduled", "success": true}')
self.request_jsons.append({"success": True, "reason": "Job Scheduled", "message": "", "data": {"job_id": 42}})
self.request_strings.append('{"data": {"job_id": "42"}, "message": "", "reason": "Job Scheduled", "success": true}')
self.request_jsons.append({"success": True, "reason": "Job Scheduled", "message": "", "data": {"job_id": "42"}})
self.request_objs.append(
SchedulerRequestResponse(success=True, reason="Job Scheduled", message="", data={"job_id": 42}))
SchedulerRequestResponse(success=True, reason="Job Scheduled", message="", data={"job_id": "42"}))

def test_factory_init_from_deserialized_json_0_a(self):
"""
@@ -31,15 +31,15 @@ def test_job_id_0_a(self):
Assert the value of job_id is as expected for the pre-created example object at the 0th index.
"""
example_index = 0
expected_job_id = 42
expected_job_id = '42'
self.assertEqual(expected_job_id, self.request_objs[example_index].job_id)

def test_job_id_0_b(self):
"""
Assert the value of job_id is as expected for the object deserialized from the example JSON at the 0th index.
"""
example_index = 0
expected_job_id = 42
expected_job_id = '42'
obj = SchedulerRequestResponse.factory_init_from_deserialized_json(self.request_jsons[example_index])
self.assertEqual(expected_job_id, obj.job_id)

@@ -433,15 +433,14 @@ async def handle_request(self, request: Union[JobControlRequest, JobInfoRequest,
session, is_authorized, reason, msg = await self.get_authorized_session(request)
# Generate this regardless as a way to determine what our response type is, but ...
response_if_not_auth = self._generate_request_response(request=request, success=is_authorized,
reason=reason.name, message=msg)
reason=reason.name if reason else '',
message=msg if msg else '')
# ... only use this directly if we fail to be authorized
if not is_authorized:
return response_if_not_auth
else:
async with self.service_client as scheduler_client:
# ... use as just an indicator of the right type otherwise
return await scheduler_client.async_make_request(message=request,
response_type=response_if_not_auth.__class__)
return await self.service_client.async_make_request(message=request,
response_type=response_if_not_auth.__class__)

@property
def service_client(self) -> RequestClient: