Implement viz_publish_service lambda auto republish via geoprocessing service #894
Conversation
```diff
@@ -0,0 +1 @@
+{\rtf1}
```
What is the purpose of this?
Remnant from creating a markdown file on Windows (rich text format).
This readme is a placeholder meant to hold documentation for the GP service.
Calculating stage values from the hydrotables was failing when certain hydrotable values returned NaN. Added a catch to ignore those values.
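The NaN-skipping catch described above might look roughly like this. The function name, the list-of-values input, and the unit-conversion placeholder are all illustrative, not the PR's actual code:

```python
import math

def stages_from_hydrotable(hydrotable_values):
    """Compute stage values, ignoring NaN entries instead of failing.

    Sketch only: the real calculation operates on hydrotable records;
    the multiplication below is a placeholder for that math.
    """
    stages = []
    for value in hydrotable_values:
        # Skip the NaN entries that previously crashed the calculation.
        if value is None or math.isnan(value):
            continue
        stages.append(value * 0.3048)  # placeholder conversion, e.g. ft -> m
    return stages
```

The key point is that a single NaN row no longer aborts the whole stage calculation; it is simply dropped.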
Thanks for the help getting this on UAT!
The merge-base changed after approval.
UAT test requested for geoprocessing service

The geoprocessing service is meant to convert a mapx file to a service definition (.sd) file and place the file in the necessary location on S3. The viz_publish_service lambda publishes the service using that .sd file.

Test procedure

Success: the .sd file and publish flag are successfully recreated, and the service is successfully republished and working.

Failure: the .sd file and publish flag are NOT successfully recreated, or the service is not republished and working.

Recovery/rollback: manually restore the .sd file and publish flag in their respective locations in S3.
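The success check (that the .sd file and publish flag were recreated on S3) could be automated with a small helper like the sketch below. This is not the lambda's actual code; the function name is an assumption, and the caller supplies bucket/key names:

```python
def s3_object_exists(bucket, key, s3=None):
    """Return True if s3://bucket/key exists, else False.

    Sketch for verifying that the .sd file and publish flag were
    recreated in their expected S3 locations.
    """
    if s3 is None:
        # Imported lazily so the helper can be exercised with a stub client.
        import boto3
        s3 = boto3.client("s3")
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return True
    except Exception:
        # botocore raises ClientError (e.g. 404) when the object is absent;
        # catching broadly keeps this sketch dependency-free.
        return False
```

Calling it for both the .sd key and the publish-flag key distinguishes the success and failure outcomes above.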
This PR is essentially the same as #630 but with a few changes and loose ends to tie up. I believe most of it is captured in the list below, but I will add additional information as PR comments as it comes up.
Background
At this point, all services have been moved off of the Viz EC2 instance, and it only performs one task: converting ArcGIS Pro Projects (which we save as MapX documents) into the ArcGIS service definition (SD) files that viz pipelines need to automatically publish services to ArcGIS server, including updating the data sources with the correct database host and authentication information.
The Viz EC2 instance does this by running the `\aws_loosa\deploy\update_data_stores_and_sd_files.py` script as part of the `Core\EC2\viz\scripts\viz_ec2_setup.ps1` deployment script, and this is the number one cause of issues during all deployments, as it takes quite a bit of work to dig into the logs and troubleshoot issues on the Viz EC2 machine. Because ESRI is greedy and won't fully allow their Python API to update these map document parameters (or allow use of a limited version of ArcPy in a container that can do it), we've had to get creative on another way to automatically update these map service definitions as part of our automatic pipelines - credit to Justin for initially coming up with the idea of using a geoprocessing service.

With Bob's help, I've successfully tested this geoprocessing service in TI, and while questions remain around how this will actually work operationally with the regular pipelines, it seems worth a shot. That said, because it is ESRI, there is a pretty convoluted workflow required to make this happen.
Steps required to finish

- Copy the files in `Core/LAMBDA/viz_functions/viz_publish_service/services` to the expected location on S3. Currently in TI this is `s3://hydrovis-ti-deployment-us-east-1/viz/pro_projects/`. This value is currently hard-coded in lambda_function.py but should be set from an environment variable.
- Configure ArcGIS Server to use a custom Python environment with the AWS boto3 library installed. Bob did this in TI following the instructions of this ESRI doc. I believe this will need to be baked into the AMI, so this task should be assigned to whoever owns that process. I'm not sure which server of the ArcGIS stack this should be installed on - probably the same one as the print service? After futilely attempting to make a custom Python environment work, @shawncrawley suggested avoiding the problem by making a `subprocess` call to the `aws` command to avoid needing boto3. This worked well. See this comment.

Testing